A Visit with Andrew Thomson · Episode 33 · 43:37

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: All right.

Welcome everybody back to in the interim.

I'm Scott Berry.

I'm your host, uh, today, and I
have, uh, a, a guest today, a really

interesting guest today, Andrew Thomson.

Andrew is the owner and
lead consultant of cio.

And he's gonna help me if I, if I get
the name wrong, but he has, um, he has

freshly, um, uh, started this consulting
company and he's the lead consultant

for the consulting company, and he is
finishing 11 years at EMA, and seven

years before that he was also a regulator
in the UK, and so he has 18 years of,

quote unquote, being a regulator.

Um, and so I, it, it's a
wonderful conversation.

Uh, so I'd like to talk about life as
a regulator and, and what that is and,

and moving into a consulting space now,
but welcome to, in the interim, Andrew.

Andrew Thomson: Thanks, Scott, and
thanks for having me.

Scott Berry: So let's, I, I, I,
let's, let's talk a little bit

about, uh, how you got there, how
you got to EMA, uh, a little bit

perhaps about your education and, and
that, and becoming a statistician.

So, first of all, did, where,
where, where did you grow up?

Andrew Thomson: Um, I grew up just north
of London, um, in a, in a small town

called Ware. Um, it's got a, a large,

uh, plant of GSK's. Um, they
discovered Ventolin, uh, salbutamol, there.

It had been made there for many years,

along with other oral and inhaled products.

Um, and at school I was
quite good at maths.

Um, but I wanted to be a doctor.

Scott Berry: Mm-hmm.

Andrew Thomson: and in the UK
you specialize pretty early.

At 16, you pick three or four subjects
that you want to study, um, and, and then

specialize quite early in college as well.

And, and I wanted to do medicine,
but I realized quite early on that I

was probably too squeamish for that.

Um, and that kind of forced me more
towards, uh, doing a maths degree.

And I was always very good at maths.

Um, and I went to university and I
found that I'd, I'd hit my ceiling.

I wasn't very good at pure maths.

I really wasn't very good at applied
maths, but I managed to be at least

passable in applicable maths, in
probability, statistics, uh, and

associated disciplines like that.

So I kind of ended up specializing in
statistics simply so that I

could actually pass my degree.

Um, and, but I, but I was quite
good at statistics nevertheless.

Scott Berry: Yeah.

Yep.

Uh, and this was, this was Cambridge,

uh, for, for undergrad?

Yeah,

Andrew Thomson: It is
Cambridge for undergrad.

And actually in, in, in my last
year, the, the lecturer in the statistical

modeling course I was doing put up
a, a slide that said that Glaxo Wellcome

were going to fund master's places

in, uh, medical statistics
courses around the UK, and I didn't

really know what I wanted to do.

I still had that interest in medicine.

And, and this seemed great because someone
was gonna, you know, pay my tuition

fees and give me a stipend to live on.

Um, and I could be a student for another
year whilst I figured things out.

So I went off.

I was very fortunate, um, to, to get that.

I had a, a really fun interview at Glaxo.

I very much enjoyed.

Um, and then off to
Southampton to do my masters.

Um, yeah.

Scott Berry: Okay,

Andrew Thomson: Um,

Scott Berry: so, so Southampton
to do your master's,

Andrew Thomson: then at the end of that,

Scott Berry: there, the decision,

Andrew Thomson: Yeah. So I still
didn't know what I
wanted to do at the end of that.

So that was, uh, that was the challenge
to know what it was I wanted to do.

So I took a little bit of time out
at the end of that as well, um,

and gravitated towards academia.

Uh, and I actually started as a Bayesian
statistician, and a very Bayesian one,

at, uh, Imperial College under the,
under the watchful eye of, of Sylvia

Richardson, who's a, you know, a, a
true great of Bayesian statistics.

Uh, incredibly fortunate to have that
opportunity, but I soon realized that

if I wanted to progress as an academic,
I would probably need to get a PhD.

Uh, it's the kind of an
entry level requirement.

So I, I moved, um, across London to, to
do my PhD in cluster randomized trials.

Yeah,

Scott Berry: so London School of
Hygiene and Tropical Medicine,

uh, where you described, uh,
uh, working on cluster trials.

Have you been able to
work on cluster trials?

I know you spent a number
of years in epidemiology

and, and regulatory.

Have you done much in
cluster trials since then?

Andrew Thomson: No, not really.

Um, I'll, maybe I'll come to that later
when I'll talk about the, you know, kind

of special regulatory use cases for those.

Um, but it is quite niche.

Um, but, but nevertheless, it was, it
was definitely back to, to randomization

and, and frequentism rather than, than
Bayesian statistics and observational data.

But, you know.

Sometimes the, these cluster randomized
trials are actually, when there

are a small number of clusters,
they, they're kind of a, a bridge

between epidemiology and, and
randomized data sometimes.

Um, and, and again, when I
finished that, I still didn't

really know what I wanted to do.

Um, and so I, um, applied for some
jobs, one of which was a regulator.

And actually, I, I met
the, the head of the statistics unit at
the time, at a conference the year before,

and I'd spoken to him, so I had a little
bit of an inkling about what it might

involve, but I was pretty naive to it.

Um, and, um, I applied for a
few jobs and I was offered the

regulatory one and I decided that
I'd, uh, I'd take a chance on that.

And

I guess that it suited me
because I managed 18 years there.

Um.

There was no strategic
grand plan to my career.

I didn't have anything like a,
you know, in five years time I

want to do or accomplish this.

And, and pretty much I kind of fell
into it, but in a very measured way.

And I always had agency over my
choice and I hadn't ever actually done

something that I didn't want to do.

But, but I'm, I was not a, a
kind of grand career plan person.

Scott Berry: Yeah.

Yeah.

Uh, and, and we'll come back to that
because you're, you're now making

more decisions about your career and,

um, you have a grand plan now, but

let's, let's sort of jump
into this point where

you jumped into regulatory.

Uh, EMA, um, uh, within this, so
you've been at EMA for 11 years.

So by, by the way, did the whole
Brexit and the EMA moving from London

to, uh, the Netherlands, did you,
did you geographically move to the

Andrew Thomson: So yes, so we moved from,
from the UK to, uh, to Amsterdam and I

actually, I acquired French citizenship
through my wife, which has made life,

uh, a lot easier for me and my family.

Um,

Scott Berry: Ah, Okay.

Okay.

So you currently reside, uh, in Amsterdam.

Andrew Thomson: So I currently reside
in Amsterdam and, and, you know, want

to stay here and perhaps maybe why
consultancy was the next step for me was

that, you know, we're happy here from a
kind of personal point of view and, and

it, it's a good opportunity to be able
to, to work from where I want to work.

Scott Berry: Uh huh.

Yeah.

Very nice.

Very nice.

Uh, okay.

Um, so, so life as a regulator.

Um, I, I've, I've been
doing this for 25 years.

I've spent a lot of time, and, and I
don't even like the aspect of "the opposite

side of the table" of regulators, I think
it's almost too much, too combative,

but spent a lot of time working with
pharma, building trials, working with

other academic groups, and all of that.

Uh, what's life like as a regulator?

So you're a statistician at the EMA, uh,

and I know there's certain areas
you can and can't talk to, but

just generally, what is life like
as a statistician at the EMA?

Andrew Thomson: Uh, well, I've
held many different positions.

I actually started at the UK regulator,
which is the MHRA, before, and I was

a, a statistical assessor there, and,
and the kind of equivalent in US

parlance would be a statistical reviewer.

And mostly that's kind of what you do
is, is reading clinical study reports,

thinking about what's been written
and then writing your opinion on them.

It's referred to as quiet, contemplative work.

And then once you've done that, you
then have to work in a multidisciplinary

team with other people who also like
to spend their time doing that quiet

contemplative work and come up with a,
with a joint piece of work, particularly

with, with medical assessors.

Once you've done that, back in the day,
um, at the MHRA, there was the, the

Commission on Human Medicines, which
is, uh, you know, kind of the great

and the good, 20 expert academics.

And every month you
would have to go there.

It was in person at the time, very
formal suit and tie type occasion.

It's basically an advisory
committee, and you have to

sell your assessment report and justify
why your take on a clinical trial

and development program is the correct one.

So being able to do your job well
requires you to be able to do all three.

The kind of somewhat introverted, quiet
contemplative work, the, the teamwork,

but also then on top of that as well, kind
of going out to a team of global experts,

you know, they are not your peers.

Um, and, and convincing them
of, of the, of the rightness

of your assessment report.

So it's quite a rare breed
of people who can actually,

uh, do all of those without
stepping outside their comfort zone.

And I think most regulators do.

Um, I moved sideways and upwards to
be head of epidemiology there, and

that was a little bit different.

I was, you know, I was managing
people now, and it was post-authorization.

Um, so it was more observational studies,
safety studies, but also meta-analysis.

And in my team I had data scientists,
epidemiologists and statisticians

who were, who were doing the work,
but also they were doing their own

studies using databases that were
held in house, particularly, uh, the

clinical practice research data link,
which is a UK primary care database.

And, and, and using that to do studies
to inform regulatory decision making.

I enjoyed that, but then I moved
over to EMA 11 years ago and back to

statistics and more kind of, you know,
kind of, uh, phase three studies,

uh, kind of point of licensure, but
also, uh, scientific advice as well.

Um, and, and.

EMA is not the same as FDA, um, and the
the best way I've found to describe it

to, to audiences who are familiar with
FDA but not EMA, is to think of it more in

terms of every single state has their own
FDA and each month they go to Washington

DC and make a coordinated decision
that applies to everybody in the US.

So it's, it's a hub.

There's coordination, there's a lot
more peer review, and, uh, latterly, I was

doing a lot more methodological strategy:

a big project on, on reorganizing
how, uh, methodological advice in the

system works in general, through the,
the working parties reorganization.

Um, I've always been fortunate; in my
position at EMA I was very lucky.

I was given big projects that I
enjoyed: the methodology working party

I just mentioned, um, that work,
but in particular ICH.

Uh, I was initially on ICH E11A, which is
pediatric extrapolation, as a statistician,

and then subsequently on ICH

E6 Annex 2, which is GCP
for clinical trials that utilize

real-world evidence and decentralized
elements and pragmatic elements.

And ICH again is a different
set of skills you need.

There's more kind of
personality management.

I was what's called the regulatory
chair who, whose job it is to try

and facilitate and harmonize between
differences between regulatory regions.

You need drafting skills.

Being able to find that, that one
word or that one phrase that

encapsulates the, the concept that
everyone can actually buy into.

And you also have to have kind of
cross-cultural awareness and appreciation.

Um, and I really enjoyed my time at
ICH. I have very fond memories of it.

Um, a particular good
one is, uh, you, you go to a hotel

somewhere in the world and you sit
in a hotel conference room for

four or five days. There are early
starts, there are late finishes, and

then one evening you go out with the
team and you have a team dinner, and,

and, and they're always good fun.

And, and, and learning Japanese tongue
twisters from colleagues at PMDA was,

uh, a particular thing I would never
have had an opportunity to do otherwise.

Um,

Scott Berry: Fantastic.

Andrew Thomson: I think also my
time at EMA has the pandemic
in the middle of it as well.

And that's always going to be a
lens that I view that through.

Um, like I said, I did my, my
PhD in cluster randomized trials,

particularly the design of HIV and
tuberculosis trials that were being run

out of the London School of Hygiene
and Johns Hopkins.

And, uh, so I had this kind of
background in infectious diseases and

one of the differences between Europe
and and the US is that in Europe, the

statisticians, um, don't, uh, have to kind
of work across every therapeutic area.

So you do see all developments
across all areas, whereas at FDA

there are different divisions of
biostatistics within the office there.

Um, and, and of course people move around,
but, but within EMA, you see everything.

Um, and actually, when, when the
Ebola outbreak happened, um, I was,

I was in the office and I had
some background in infectious diseases.

I'd worked with people, uh, in, um,
in the infectious diseases team at

EMA and across, across Europe as well.

So we'd had quite a lot
of learnings from that.

So when the pandemic hit, I already

knew the people involved
within the system.

And, and that kind of meant that I
was heavily involved in, in the work,

particularly in the early stages.

And everyone wanted scientific advice.

Everyone wanted to try something that
could possibly work, and rightly so.

Um, it was incredibly disruptive. In order
to give proper advice, you need that

time to be able to read proposals, to
digest proposals, and, and to come up with

a sensible plan of action for people.

And there were, there
were just so many there.

Um.

And also the kind of the
societal expectation as well.

EMA as an agency,
as an organization, was not

particularly well known.

Most of my friends hadn't heard of it,
and then suddenly it was front page

globally, uh, on, on certain days.

So an incredibly, uh, disruptive time
on top of the Brexit, uh, move as well.

Scott Berry: Yeah.

Yeah.

And actually the UK was, I mean, fantastic
in terms of, uh, MHRA, in terms of

the number of patients they were able
to put in trials, RECOVERY, REMAP-CAP,

and all these platform trials.

Uh, uh, a bit of

expansion in clinical trial science,
um, through COVID as well, with

platform trials and, uh, uh, the
need for these in therapeutic areas.

So I, I, in a, in a lot of ways, I
think the science of clinical trials

was expanded, as well as the, the
impact of the regulators was fantastic.

Andrew Thomson: Yeah, for sure.

Scott Berry: So, yeah.

So at EMA, um, when your, your, your
work as a statistician, you, you

described the various things and the,
the impact on ICH and these other things.

I think we'll come back to that.

How much time do you spend

reviewing trials that are, are going to
be conducted and providing advice on that,

as opposed to reviewing trials that have
been conducted, and the data, and coming

up with decisions about should this be
approvable, should this be approved?

Andrew Thomson: Well, the European
Medicines Agency system as a

whole involves all of the, the
individual regulators from

each country as well.

So, um, it depends entirely on, on,
on countries who, uh, bid to be the

Rapporteur as it's called in Europe
in terms of the lead country who's

going to be doing the assessment.

So it's very much driven by an individual
country, uh, case by case on that I think.

There, there is more scientific advice
than there is assessment work, because

inevitably things don't make it
through phase three at the end of it.

Um, so the, the total number of
procedures is always going to be higher

for scientific advice, but the, the
scientific content of a briefing document

for scientific advice procedures
is nowhere near the content

of a clinical study report.

And so, um, both, both of
those things are important to the

functioning of the regulatory system.

Um, and, and, you know, there is, there
is less assessment work, but it takes

more time, and EMA's role in that...

I mean, I, I can't speak for EMA, I'm, I'm
not a regulator anymore, but it, it is

a, an institution that, that ensures
that a, a harmonized position is, uh,

arrived at across Europe.

Scott Berry: Hmm.

Yeah.

Yeah.

Uh, uh, challenging.

Um, so the, the, um, the, the role of
ICH, um, this, you work incredibly hard

on these guidance docs, and it sounds
like a lot of, you, you've used the word

sort of harmonization, and working, uh,
amongst, uh, uh, a, a group.

It seems like a huge amount
of your work is about

harmonization, as you described, finding
the right word that represents and doesn't

over-represent, sort of thing, in all of
this. Um, the, the, the ICH, and you,

you had, you do have involvement in E20,
and I know it's fresh, it's in draft, and
and I know it's fresh, it's in draft and

probably don't want to talk about much
of the details of this, um, but it sounds

like it, at least in EMA and it may be a
little bit different at the FDA in terms

of harmonization and it feels different.

So when we, when we go to EMA,

You might go to the scientific advice
working party and you get advice,

and it can be almost different among
people sitting on that, that committee.

But when you go to the FDA, you get
more of sort of a single voice and

maybe the structure is is that way.

So it sounds like a huge amount of those
18 years is working and harmonizing

a, a, a ton in terms of, of advice.

Andrew Thomson: Uh, yeah, I mean, I, I,
I wouldn't comment on, you know, kind of

differences between, between EMA and FDA.

But I think, you know, within the
spirit of the podcast title, and everything

else being equal, um, if you, if you
think about a trial where everything

else that's proposed is acceptable, and
it's a trial with no interim analysis,

I think most people
would be happy with that.

Um, and you know, I, I can't speak
for regulators, but I can say

personally that I do not know any
regulator who would be happy with

the trial that had an adaptation
planned after every single patient.

So sometimes, um, the question arises,
you know: how many looks is okay?

There's a number somewhere
between zero and N, um, where

questions start to be asked.

Now, this is a value judgment
that would depend on the clinical

context, and so it's not always
possible to give definitive answers.

And so you will get variable
answers depending on who you ask.

Whether a trial with, say, seven looks is
okay, or whether four, or whether two.

And whenever there's a value judgment
involved, I think diversity of opinions

is inevitable and they will arise,
and not everything can be written down

as a rule, especially numerically.
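As a rough illustration of why the number of unadjusted looks draws questions (this sketch is an editorial aside, not anything from the conversation): repeatedly testing accumulating data at a nominal 5% level inflates the overall type I error. All sample sizes and simulation settings below are arbitrary assumptions.

```python
import math
import random

# Monte Carlo sketch: under the null hypothesis (true mean 0), take
# n_looks interim looks at accumulating data and reject at any look where
# the unadjusted two-sided 5% test fires. More looks -> more false positives.
random.seed(0)

def overall_type1(n_looks, n_per_look=100, n_sims=2000):
    rejections = 0
    for _ in range(n_sims):
        total, n = 0.0, 0
        for _ in range(n_looks):
            total += sum(random.gauss(0, 1) for _ in range(n_per_look))
            n += n_per_look
            z = total / math.sqrt(n)   # z-statistic for H0: mean = 0
            if abs(z) > 1.96:          # unadjusted two-sided 5% test
                rejections += 1
                break                  # trial stops at first "success"
    return rejections / n_sims

one_look = overall_type1(1)
five_looks = overall_type1(5)
print(one_look, five_looks)  # the overall error rate grows with the looks
```

This is exactly why the answer sits "somewhere between zero and N": each extra unadjusted look spends more alpha, and how much is tolerable is the value judgment being described.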

Now I learned how to
be a regulator at MHRA.

Uh, there was an amazing team of four other
statisticians, all of whom I learned

so much from in many different ways.

Now, it would be surprising though, if
I did not have similar opinions to them

on acceptability of certain approaches.

So we wouldn't always agree on
everything, and it's not that

you would always get a harmonized
voice, but in general, it's reasonable

to assume that there'd be some sort
of correlation between my opinion

at MHRA as a regulator and the
other people in the statistics team.

Now, one of the, the things I learned
in my PhD is that the, the presence of

within-cluster correlation implies,
and is implied by, the presence of

between-cluster variability as well.

So whenever you have that correlation
within any particular organization or

region, that does mean definitionally,
you will get variance between

different regions as well, or
different clusters, so to speak.
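The within/between duality here can be made concrete. In the standard random-effects model Y_ij = b_i + e_ij, the correlation between two subjects in the same cluster (the intraclass correlation, ICC) is sigma_b^2 / (sigma_b^2 + sigma_w^2), which is positive exactly when the between-cluster variance is. A small illustrative simulation; all parameter values are assumptions for the sketch:

```python
import math
import random

# Simulate clustered data Y_ij = b_i + e_ij, where b_i is a shared
# cluster effect with variance sb2 and e_ij is noise with variance sw2.
# The ICC is sb2 / (sb2 + sw2): within-cluster correlation exists
# if and only if the clusters genuinely differ from one another.
random.seed(1)

sb2, sw2 = 1.0, 3.0          # true ICC = 1.0 / (1.0 + 3.0) = 0.25
n_clusters, m = 400, 20      # 400 clusters of 20 subjects each

data = []
for _ in range(n_clusters):
    b = random.gauss(0, math.sqrt(sb2))   # shared cluster effect
    data.append([b + random.gauss(0, math.sqrt(sw2)) for _ in range(m)])

# One-way ANOVA estimate of the variance components:
# E[MSB] = sw2 + m * sb2 and E[MSW] = sw2.
grand = sum(sum(row) for row in data) / (n_clusters * m)
means = [sum(row) / m for row in data]
msb = m * sum((mu - grand) ** 2 for mu in means) / (n_clusters - 1)
msw = sum((y - mu) ** 2
          for row, mu in zip(data, means)
          for y in row) / (n_clusters * (m - 1))
sb2_hat = (msb - msw) / m
icc_hat = sb2_hat / (sb2_hat + msw)
print(round(icc_hat, 3))     # lands near the true ICC of 0.25
```

The analogy in the conversation then reads directly: correlated opinions within a regulatory "cluster" are, definitionally, the same phenomenon as variability between clusters.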

Um.

I think another example I could
give is that, and, and you've heard

me talk on this before Scott, but
there are potentially at least

two schools of thought about
the control of type one error.

One school of thought is that any
method that controls type one error

is fine and can be acceptable,

i.e., type one error control
is necessary and sufficient.

And the other school of thought is that,
uh, in the absence of any obvious

benefit of one method over another,

uh, if they both control the type one
error, then the simplest one is preferred.

Um, and so therefore, type one error
control is not, uh, sufficient.

It's necessary.

Now it is likely that decision makers
of all stripes and statisticians

of all stripes globally will
hold different views on this both

within and uh, between regions.

And that's another reason why
harmonization can be hard as well,

because people come from different
schools of thought about how they

think about things scientifically.

And I don't, I don't think that's
necessarily a regulatory thing; I think it

also relates to the training we all have
as statisticians during our education.

Scott Berry: Hmm.

So I want to touch on this a little bit.

So I, I, people tuning in may want to
hear Scott and a regulator going at it

on different things, and I, I, but you
know, I, I don't think that's true.

But I think a point where we may differ
a little bit, and you touched on a

little bit, is complexity of trials.

Now, by, by its nature, that word
is an awkward word, because what

one person thinks is complex,

another person may think
is not complex at all.

And so, you know, "too complex" is, by
definition, bad, sort of thing.

So let's, let's talk a
little bit about this.

And so you're, by the way, you're sitting
in the interim, so you're doing an interim

right now, uh, within this, but I think.

Um, sometimes we equate things like number
of interims to complexity, and we've done

trials with 30 interim analyses, phase
three trials with 30 interim analyses.

And a lot of it is what do you
do with that interim analysis?

And I, I think if you could
potentially claim success 30 times

throughout the trial, you might have
an issue with that, that it's just barely

getting to some level of success.

But if, if it's futility analyses,
if it's making sure it's the right

patient population, uh, response-adaptive
randomization, or dose

selecting, these all mean very, very
different things for the trial and

the role of, quote unquote, complexity.

So one trial with seven interims
might have a low level of

kind of complexity, where another
with seven you might consider, oh,

that's super complex, uh, generally.

And so I, I think sometimes number
of interims gets lumped into that.

And I think it's, in many ways
it could be a bad measure of

the complexity of the trial.
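The distinction being drawn here, that what an interim does matters more than how many there are, can be illustrated with a sketch: interims that can only stop for futility cannot inflate the type I error, because the single final efficacy test is unchanged and an early stop can only cancel a would-be rejection. The aggressive futility rule (stop if z < 0) and all sample sizes below are invented for the illustration.

```python
import math
import random

# Compare, on the same simulated null data, a design with one final test
# against the same design plus five futility-only interim looks.
random.seed(2)

n_total, n_sims, n_looks = 500, 4000, 5
reject_plain = 0      # single final test, no interim looks
reject_futility = 0   # same final test, but stop early if z < 0 at a look

for _ in range(n_sims):
    x = [random.gauss(0, 1) for _ in range(n_total)]   # H0 true: mean 0
    z_final = sum(x) / math.sqrt(n_total)
    rejects = z_final > 1.96            # one-sided 2.5% final test
    reject_plain += rejects
    # Futility: stop if the accumulating z-statistic is negative at any
    # of the equally spaced looks (an intentionally aggressive rule).
    stopped = any(
        sum(x[: n_total * k // (n_looks + 1)])
        / math.sqrt(n_total * k // (n_looks + 1)) < 0
        for k in range(1, n_looks + 1)
    )
    reject_futility += rejects and not stopped

rate_plain = reject_plain / n_sims
rate_futility = reject_futility / n_sims
print(rate_plain, rate_futility)  # futility looks can only lower the rate
```

On the same data, a futility stop can only remove rejections the final test would have made, so the error rate with futility looks is never higher; contrast this with the efficacy-look simulation, where each look adds chances to reject.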

Andrew Thomson: Uh, yeah, I mean that,
that's certainly one point of view.

I think my, my take would be that
the, the most interesting designs,

and hence analyses, actually happen
at phase II. Uh, it is also when

most development programs end.

Um, and I think the decisions that are
made at phase II are different to those

that are made at phase III, for, for many reasons.

But two I'd like to pick out:
firstly, at the end of phase II, I

think the decisions, uh, whether to
advance to phase III or not, belong to

the, the sponsor of, of the product.

Whereas at phase III you've
got these two different groups.

You have the regulators who say, we,
we will set alpha, we will set this at

the level we will accept, often 5%.
And then companies pick their beta and

the sample size that goes with that.

And so there's that kind of, like,
co-ownership of the decision, where

that's not the case for phase I and
phase II, and so therefore there is

a subtle difference between decision
making on, on the, the outcomes of that.

What I would also say is that there
is, I still think, an interest in

the learn-and-confirm paradigm, and
yes, it's important that we have

confirmatory studies, and yes, we
learn things from those as well.

You know, we learn more about safety
because it's in larger numbers.

We learn more about longer term safety
because the treatment's used for longer.

We could learn more about other endpoints,
which might not have been included in

phase II. So there are still learnings to
be had from a phase III study, but it's a

confirmatory study within that paradigm.

So it should be fairly boring.

Now,

I would also say that on top of
that, there are some situations,

and rare and ultra-rare diseases

are an example, I would say, where
there might not be enough patients

to allow you to be able to be boring.

You actually have to say, well, I'm sorry.

The kind of simple normal clinical
trial isn't going to give us the

information that we need here.

And I think, um, that's maybe
where, where the scope for more

interesting designs could be, moving forward.

I think in, in terms of platform
trials as well: type

one error control is easy when
you have one drug in one trial

and one decision to make from it.

And even then, if you have multiple doses
of the same drug, it's kind of easy, 'cause

you can specify an obvious hierarchy.

But as soon as you have two
or more drugs, it's unclear

what you are even trying
to do. Is type one error

a property of a drug,
or a property of a trial?

And how do you square that with ICH E9
and the statement that trials need to

answer a limited number of questions?

So I think moving forward there might
be some tension between guidance being

perceived to need to adapt to more modern
trials, but also maybe trials need to

be designed to be fit for purpose for
all decision makers of all stripes

who have uh, their needs as well.

Um, so, moving forward, that's a challenge.

And I think COVID, like, completely
kind of supercharged the platform

trials world, simply 'cause there
were these multiple drugs being

repurposed or adapted from existing
clinical development, and also the, the

patients out there to make it happen.

Particularly for specific
variants, it was of its time, and

I don't know whether they work
best in these specific situations,

or the results generalize to phase three.

And I think the benefits of Bayes here
could be one of the stronger ones, because

if you've got so many products and you
are just trying to find one that works,
or two or three that work, being able to

drop products early with strong futility
rules is potentially the best way,

and maybe Bayesian methods could have
a lot of utility.

Now, for a company in
normal phase three development, they

might not want such strong futility rules.

A false negative is very, very expensive.

Scott Berry: Mm-hmm.

Andrew Thomson: Um, whereas in a pandemic
with 5, 10, 15 possible options, all

off-patent, it's a different situation.

So I, I think, you know, we, we have
learned a lot about the science,

but then understanding at which
stage these trial designs might be

fit for purpose, and maybe something
like phase one, for example, as well.

If you think about oncology products,
they're often used in combination

in, in some oncological indications, and

there's quite a few different
classes of medicine now.

What they're going to be used in
combination with, at which doses,

et cetera, is, is actually quite
a difficult question to answer.

And something like a
platform trial might offer

the ability, for a company who owns all,
all the decisions at the end of it, um,

the, the opportunity to, to plan
their clinical development program

in general for a particular compound.

Scott Berry: Yeah, so I, I, I think
we're in complete agreement about the,

the, the pandemic and the role of that.

And it's a decision problem, um, you
know, a societal decision problem.

And, and a lot of that was
addressed in that.

Then to come back to this sort
of simple, boring phase three

part of it.

In a number of scenarios,

I think the, the opposite of that
is not that all the questions

get answered in phase two.

I think, ideally, this is, in many
ways, trying to fight

this issue of learn and confirm.

You never know the right
answer, and phase two doesn't

always tell you the right dose.

It doesn't always tell
you the right patient.

We don't, we don't know the answer to
that and a lot of times the, the learn and

confirm forces you to do small phase two.

Because the role of phase
three is, is, is so big.

So entering phase three where maybe
we want to go with two doses 'cause

we're not sure of the two doses, or
we want to do enrichment, we want to

narrow the population in phase three,
we think we know the right answer.

That aspect of being able to do
that in phase three can be sort

of critical to a good phase three.

And what I think happens if somebody says,

you know, that's too complex,
you can't do that:

They don't go back and run phase
two to figure out the right dose.

They don't go back and figure
out the right patient population.

They just guess, uh, you know, that,
that, that we think we know the right

answer, and then they run phase three.

It sort of creates this
undue risk on the system
when it has to be that way.

So I think there's gonna
be a merging of that.

And a merging of learn and confirm,
and I think that's a bit of the

notion of seamless, or learning into
phase three, which makes phase three

more complex and not quite as simple.

And I think that's where a bit of
the rub happens in the scenario.

And, and the opposite of that,
let's just figure out all the

answers and answer everything

in phase two: it's not sustainable
anymore, in the way of the financial,

financial aspects of it.

So they just guess and go into phase
three and then it's, it's sort of worse.

So that's the part where I think
the rub happens with wanting

big, simple phase threes:

uh, that we don't know
the right answer, usually.

Andrew Thomson: Yeah, quite possibly so.

But I think it does depend on the
nature of any adaptations that have been

planned and the number of them, because
eventually, if they are all enacted,

you are asking yourself the question,
what exactly is this trial confirming?

Scott Berry: Mm-hmm.

Andrew Thomson: I think

one of the other problems that we
have is that it, it is generally

considered important that there is
consistency of, um, effect between, um,

uh, time periods before and
after interim analyses.

And with lots of analyses, I won't
say too many, with lots,

sometimes those time periods are small
and it's difficult to know exactly what's

going on and getting enough information
to be able to make an informed statement

about how much, uh, uh, the, any
difference in effect you might see is,

is due to chance and how much it was due
to a fundamental change in the trial.

Because if the first half of the trial
is just fundamentally different from the

second half of the trial, full stop, is
it reasonable to combine them in a,

in a single analysis and
make inference on that?

And actually sometimes you have to draw
a line and say, this is just too much.

Um, I think also the, the, the flip
side is that regulators have to

assess what is in front of them.

Um, and it's easy to say, yeah, well
I wouldn't have done it like that.

But that's not actually an option;
there is a dossier that

the decision needs to be made on.

Scott Berry: Yeah.

So that, that, that's fascinating.

And, and so this is an interesting part of it,
and as, as a regulator, I, I, I don't,

do you, you mentioned this a little bit.

That, and, and I'm usually on the
side where I can, uh, I don't make

designs, but I help, I help sponsors
create designs, and I, here's the

pluses, here's the minus part of it.

But you're in a situation where
a design comes to you and it's,

it's almost like being a well.

I, I, I'll, I'll say it.

You, you correct me, but being a, a
referee for a paper, they write a paper.

Well, I wouldn't have said it that
way, but as a referee do I say, you

need to change the way you say that.

You know, it's not my voice
and all that, you know?

And so when a design comes to
you and you're evaluating it, you

might think, I would not do that.

I don't, I, but then you almost
have to have this thing of.

What's, what's my advice?

Do I say, I think you should
change it, or do you say this

would be acceptable and all that?

It, it's a different role
that I've never been in.

Um, uh, it, it's sort of interesting is
that the sort of case where you have to

evaluate it and you'd love to tweak it,
but you kind of have to say yes, no.

Andrew Thomson: Um, scientific advice
is a process where companies come in

and seek advice, and the EMA Scientific
Advice Working Party gives what it

considers to be the best advice for
the development at hand.

In my personal experience, that does
include suggesting that other

approaches should be taken.

The companies may or may not follow
that, and when they come in for

licensure at the end, assuming a
successful trial, it will become

an assessment issue.

And "it's an assessment issue" is a
phrase that's used globally when it

comes to topics like this.

Scott Berry: Hmm.

And so, the work on ICH E20 and the
other guidances within that: this is

now in draft form, and you were involved
in it, so I know you can't talk about

it much, but just generally.

ICH E20 is a draft guidance for adaptive
designs, and it's a global harmonization.

So in some ways it's supposed to
represent at least the level of

agreement; it doesn't mean this is the
standard everywhere and it's equal.

But are you happy with the way
E20 came out in draft?

Andrew Thomson: I'm not really sure
I can answer that, I'm afraid,

Scott Berry: Oh, okay.

Okay.

Andrew Thomson: but what I will
say is it's important that people

wait for the final guidance.

Each region has its own
consultation procedure.

And I think it's clear, at least in
the European consultation, that views

from interested parties are important
to shaping the direction of travel.

That's explicitly mentioned
on the EMA website.

So I think that's one for the future.

Scott Berry: Mm-hmm.

Okay, so now you're moving on from
being a regulator, 18 years as

a regulator.

You're changing hats.

First, just the aspect of moving into
consulting, the creation of this

consulting company.

You described a little bit that through
your education you didn't know what you

wanted to do, you didn't have a plan,
you were just doing things.

How much of a plan do you have here?

Tell me about this decision
to become a consultant.

Andrew Thomson: Um, well, I do
have a plan, yes, for sure.

I was very happy at EMA in general,
but I wanted a change.

One of the problems I had at EMA is
they kept on giving me interesting

work to do and interesting
projects to work on.

A nice problem to have, I'm sure, but
you get the feeling after a while that

I'll never leave here because they keep
giving me work that I enjoy and want to

keep doing, and I'll get to retirement
age and still be here.

Um, and so I thought, well, maybe
now's the time for me to do

something else.

And you sit down and think about
your options: what am I good at?

What do I have knowledge and experience
of that could be valuable to other

people, and what do I enjoy doing?

I like solving problems.

That's what I do, whether crossword
logic problems or challenges in

clinical development.

And then I thought about the kind
of people I'd like to work with.

I'd like to work with a small
biotech with one molecule and

help push it through development.

You know, you could be quite
close to the action there.

I'd also quite like to work with
larger pharma who employ specialist

teams of expert statisticians.

You know, there are so many good
scientific papers that come out from

industry stats teams, and there's
also a huge amount of due diligence

done not just within the pharma
industry, but also within private

equity and venture capital and among
the people financing trials.

And I could potentially support that
to help people make better decisions.

'cause to me that's what it's all about.

You know, from an assessment team within
a national regulator, to all countries

together at the EU level, to all regions
coming together at ICH: in all of those

situations I've seen that the best
decisions happen when all voices are

at the table.

Now, that doesn't mean you need to be
right, and it doesn't mean you need

to force others to your point of view.

And it doesn't always mean that
there needs to be a compromise.

In fact, sometimes one course of action
is better than another, and sometimes

compromises need to be reached.

But to me it's more about providing a
perspective that can sometimes be hard

to acquire, to allow decision makers
to understand the pros and cons of

their decision as fully as possible.

And I think if you do that, that means on
average you should make better decisions.

So if I can help people make better
decisions and solve their problems,

that will help bring safe and effective
medicines to patients faster, which is

something I remain passionate about.

And then I look at all of these things
that I want to do, and I think the only

way I'd be able to do all of them is
my own consultancy; that felt like the

most appropriate way forward.

Scott Berry: Yeah, very nice.

And so, moving forward: you described
something as a bigger picture, that

statisticians need to be part of a much
bigger picture, understand the science,

the treatment, the disease.

Do you think you're going to be a
statistician as you described at EMA,

quiet, contemplative time to solve
statistical problems, or is your lead

consultant role a much broader thing:
understanding the regulatory role here,

the advice, at a higher level?

Are you going to get into the
statistical details of design?

Andrew Thomson: Um, I see it as
more high level, I would say.

I think one of the reasons is that I
have quite a lot of experience now of

working in multidisciplinary teams.

ICH E11A, for example, was modeling and
simulation, statistics, disease

modeling, and clinicians all in
the same room.

I've worked with GCP inspectors and
methodologists of many stripes across

Europe: real-world evidence, clinical
trial statistics, and modeling and

simulation as well.

So bringing those different functions
together, but also with that regulatory

expertise, having both the technical
understanding and the regulatory

understanding, I think could be quite
an interesting perspective to bring

to companies.

Now, I don't see myself as someone who
would necessarily be answering technical

statistical questions, but I can
certainly help people understand the

technical problems they are facing and
make decisions about clinical

development.

When you have fairly technical proposals
in front of you, working through the

pros and cons, what could work, what
could not work, and why, from a

regulatory perspective and as a
methodologist, is where I think I can

add the most value.

Scott Berry: Yeah.

Very nice.

So CIO is open for business.

Andrew, I'd be happy to talk to you
about potential problems.

Are you looking forward, and I don't
know if you can do this, to a day when

you sit down and EMA is on the other
side of the table?

Can you do that?

Is there a certain amount of
time during which you can't?

Andrew Thomson: For a certain amount
of time, yes: the EMA has very strong

and appropriate conditions on staff
leaving the service, and I will of

course abide by those.

Maybe in the future, and if that does
happen, then I'll be looking forward

to it.

I think there are many other
opportunities: you and I, Scott, have

spoken on panels in the past, at
academic conferences or at more

regulatory-focused conferences like DIA.

And I'm looking forward, in a more
informal environment, at least

initially, to maybe challenging people
with some thoughts that I can now say.

Scott Berry: Yeah.

Fantastic.

Well, I certainly wish you luck.

I know we will work together on things,
and I'm happy to send people your way

for your advice.

We're always looking for that extra
advice, and I'm really happy that you

joined us here in the interim.

Oh, by the way, I keep thinking of new
questions, but, being in the interim

in a real clinical trial: would a role
as a DSMB member be something of

interest?

Andrew Thomson: I have thought about
that, I must admit; possibly.

It's not something I have experience of,
and I genuinely don't know how one goes

about the process of doing one's first
DSMB; I'd want to be mentored through

the process.

I obviously know a fair bit about
clinical development now after those

years, but my experience as a regulator
was that the hardest decisions you had

to face were always the ones where
the results were borderline.

And whenever you had a decision that
was borderline, assuming a clinical

trial has been sample-sized and
designed in a particular way, it's

borderline for one of three reasons.

Either the treatment effect was not as
large as you were hoping for, or the

variance is larger than you were hoping
for, or, and this is the nuance, you

were right on the relative scale but
wrong on the absolute scale.

By that I mean you thought you'd see,
say, a 40% improvement on your active

arm and 20% on control, and you actually
saw a 30% improvement on active and

15% on control.

So you always have that relative risk
of two, you always double your chances

of a good event, but you were wrong on
the absolute scale, and so you kind of

end up borderline.
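Andrew's relative-versus-absolute arithmetic can be sketched in a few lines of Python. The event rates are the illustrative numbers from the conversation, not real trial data, and `summarize` is a hypothetical helper written for this sketch:

```python
# Illustrative numbers from the conversation, not real trial data.
# Planned and observed results share the same relative risk of 2, but the
# observed absolute difference is smaller than the one the trial was sized for.

def summarize(p_active: float, p_control: float) -> tuple[float, float]:
    """Return (relative risk, absolute risk difference) for two event rates."""
    return p_active / p_control, p_active - p_control

planned = summarize(0.40, 0.20)   # what the trial was designed around
observed = summarize(0.30, 0.15)  # what was actually seen

print(planned)   # relative risk 2.0, absolute difference 0.20
print(observed)  # relative risk 2.0, absolute difference 0.15
```

Same relative effect, half the planned absolute effect: a trial sized for the planned absolute difference ends up borderline.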

My experience is that almost all
clinical trials that end up at the

borderline are ones where the treatment
effect is not as large as you had hoped

at the start.

And if you have actually designed your
clinical trial around the minimally

clinically important difference, then
under those circumstances the point

estimate of the effect that comes out
of the trial will, by definition, be

smaller than the minimally clinically
important difference.

That is when all of the problems start,
because you have something that is

statistically significant but possibly
not clinically relevant.
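Why a borderline result must sit below the MCID follows from the standard sample-size identity. Here is a hedged sketch under the usual normal approximation for a two-arm trial (this derivation is not from the episode; the function name is made up for illustration):

```python
# Sketch under the normal approximation: if a trial is sized so that a true
# effect equal to the MCID is detected with the given power, then
# MCID / SE = z_alpha + z_beta. An estimate landing exactly on the two-sided
# significance boundary equals z_alpha * SE, i.e. this fraction of the MCID.
from statistics import NormalDist

def boundary_fraction_of_mcid(alpha: float = 0.05, power: float = 0.9) -> float:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return z_alpha / (z_alpha + z_beta)

# With alpha = 0.05 and 90% power, a just-significant estimate is only about
# 60% of the MCID, so a borderline trial's estimate is definitionally below it.
print(round(boundary_fraction_of_mcid(), 2))
```

Any power above 50% makes the fraction less than 1, which is exactly the "statistically significant but possibly not clinically relevant" situation Andrew describes.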

People will ask questions about
estimands, estimation, missing data,

whether including one patient makes or
breaks the trial, and subgroup analyses:

can you find a particular patient
population where it seems to work

better than others?

So those borderline decisions
are really the hardest ones.

And I say that because on a DSMB, the
really hard decisions are probably

also going to be the ones where you are
right at the border, one side or the

other, but you still have to make an
absolute decision to stop or carry on.

Scott Berry: Yeah, fantastic.

And that sounded a little bit like an
advertisement for adaptive design,

where we're hoping for this effect, and
the trial can stop here, but if it's

clinically important the trial goes
bigger and gets the right answer.

But I won't put words in Andrew's mouth.

So maybe you'll get a chance to be in
an actual interim, but here, I very

much appreciate you coming on and very
much enjoyed the discussion.

Thank you, Andrew.

Andrew Thomson: Thanks
for having me, Scott.


Creators and Guests

Scott Berry
Host
President and a Senior Statistical Scientist at Berry Consultants, LLC
