The Saga of the Lecanemab Adaptive Phase II Trial · Episode 36

51:46

Judith: Welcome to Berry's In the Interim podcast, where we explore the cutting edge of innovative clinical trial design for the pharmaceutical and medical industries, and so much more. Let's dive in.

Scott Berry: Welcome everybody, back to In the Interim. Today we're actually going to take you into many interims: a unique look back at a trial, a drug, a disease. And I have a wonderful guest, Don Berry, my father, here to join me, and we're going to talk a little bit about the story of BAN2401, the story of a phase two trial, and the before and the after. So welcome back to In the Interim.

So let's go back to approximately 2013 and the sorry state of Alzheimer's disease drug development. Something like 14 straight failed phase three trials, promising drugs, but all of these trials failing. Take us back to that time and set the stage a little bit, Don.

Don Berry: Okay. It was, as you indicate, a sorry time, but Alzheimer's researchers, to their credit, understood that they were doing something wrong. They didn't know what. And the Alzheimer's Association put together a round table, a bit of a workshop, a meeting in Washington, which had the goal of studying and learning from the 14 latest phase three trials in Alzheimer's, all of which had failed. Each of the trialists was asked to present what the trial was and, to the extent that they could, why it failed. Was it the drug, was it the trial?

They had invited me to give a presentation. They were aware of I-SPY 2, which was a trial in cancer that did smart things along the way, and like many researchers at the time they were sort of salivating over doing something like I-SPY 2. So I presented my analyses of what the problem was.

I won't go through all 14, of course, but one I want to mention is a large pharma company that had a drug they were looking at for Alzheimer's. They had a phase two trial, and it led to a large phase three trial that bombed completely. The person presenting was one of the instrumental people in the company, part of the design of the phase two trial and part of the analysis to go forward into phase three, and he admitted, said explicitly: we didn't know that we had a good drug here. We recognized that we might fail. The down part, of course, was in cognition and function. The drug was getting in; it was doing what it was supposed to be doing on the biomarkers they could measure; but the effect was not big enough, so to speak, to show in cognition and function. So it was an admission, and the person was asked: okay, so why did you do the trial?


Don Berry: Why did you do the phase three trial if you didn't have enough evidence that the drug was going to have an effect? And he essentially said: well, yes, we were playing the lottery. That is, we're taking a chance, and the chance might be based on a phase three trial that, just randomly, 5% of the time, even though you don't have anything, comes out positive.

I thought that was surprising, and this happens a lot: you don't know what you have, and you sort of hope for the best. We're in World Series mode, so: you have a great pitcher, you're swinging for the fences, and you're hoping to both hit the ball and hit it right, over the fence. There's a lot of randomness. Let's hope for the best.

The result of that meeting was that Veronika Logovinsky, who was at the time at Eisai, asked if she could discuss with me their circumstance with BAN2401. So I agreed and I visited them, and they were enthusiastic, and that's really what it takes: an enthusiastic part of the company. They were enthusiastic about doing something different, and they ended up doing something different. And Scott, back to you.

Scott Berry: Okay. So: a sorry state of drug development, and questions of whether it's the drugs, whether it's the trial designs, whether it's the development plan, whether it's actually the patients and endpoints. Do these patients actually have Alzheimer's disease? Are we measuring the right endpoints? Lots of blame going around. So, as you mentioned, Eisai: Veronika Logovinsky, Andy Satlin, Chad Swanson, Shobha Dhadda, and Jinping Wang were all involved, a large number of people, a large effort.

We built a phase two trial, and the notion was: we want a better phase two trial; we don't want to just jump into a phase three trial. I'll describe the trial somewhat briefly. The design had five different arms of BAN2401, the drug, at two different frequencies: the bimonthly frequency had three doses, and the once-monthly frequency had two doses, so five active arms and one placebo arm. In order to incorporate the five arms and not have a much larger trial, response-adaptive randomization was used: interim analyses would take place, and allocation would be focused on the arms that were doing better.

The endpoint in the trial plays an interesting role here: 12-month and 18-month change from baseline in a cognitive measure in this patient population called ADCOMS. ADCOMS is a collection of questions from the MMSE, ADAS-Cog, and CDR sum of boxes that was thought to be highly sensitive in this part of the disease. So this was the endpoint in the trial.

In order to drive the adaptations, the response-adaptive randomization, we built a longitudinal model that would look at three months, six months, nine months, 12 months, 15, and 18, forecasting forward where the patient was going: using that within-patient forecast to identify how well the arms were working and to drive the adaptation. So a key part of this is what we'll call the longitudinal model.
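In very reduced form, that within-patient forecasting can be sketched as a regression from an early visit to the 12-month endpoint. This is an illustrative toy, not the trial's actual longitudinal model, and every number in it is made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated completers: change from baseline in the cognitive score at
# 6 and 12 months (hypothetical data, for illustration only).
n = 120
chg_6m = rng.normal(0.10, 0.05, n)                 # early decline
chg_12m = 1.8 * chg_6m + rng.normal(0.0, 0.04, n)  # correlated later decline

# Fit the visit-to-12-month linear relationship by least squares.
slope, intercept = np.polyfit(chg_6m, chg_12m, 1)

# Forecast the 12-month change for a patient observed only through 6 months.
new_patient_6m = 0.12
forecast_12m = intercept + slope * new_patient_6m
print(round(forecast_12m, 3))
```

In the trial, a relationship like this (fit per visit, within the Bayesian model) let every enrolled patient contribute to the interim dose-response estimate, not just the completers.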

At each adaptive analysis the trial could stop for futility; they wanted to be realistic about this, given the sorry state of Alzheimer's development. They also wanted triggers for accelerating drug development: if a particular trigger was hit, they would go quickly to phase three. If it wasn't futile and the phase three trigger wasn't hit, the trial would go on to the next interim analysis and accumulate more and more data during its course. So this is largely the trial design.

Now, the sample size was also adaptive: 300 was the minimum and 800 was the maximum. If we didn't know the answer yet, we kept going and would enroll to 800. We would then continue interim analyses as we were getting exposure through 18 months.

Now, this is a unique trial in terms of disclosure. Eisai has published the result of every one of the interim analyses during the course of this trial, and there were 16 of them. So: many interim analyses, and I think we'll take you through the story of the trial and what happened going forward. But any other comments about the design before we walk through the trial and what happened, Don?

Don Berry: The interesting thing is what happened in the trial. No, I think you should just get into that.

Scott Berry: Yeah. I know where you're going, and let's not quite go there yet. Okay. So all of this has been disclosed.

Interestingly, in the trial design I was unblinded and part of a group that was reviewing the interim analyses. Don was blinded; Don did not see the data. He was involved in the design, we were both involved in the design, but he was blinded to any results during the course of the trial. The trial was conducted by a company called Ella: they curated the data and ran the interim analyses, and then I was involved in reviewing them with Kristine Broglio, who's also here at Berry Consultants.

So the first interim analysis in the trial happens at 196 patients enrolled, and, I don't want to get this wrong, the date of this is March 2nd, 2014. At that time, randomization is updated for each of the arms in the trial. Even at this first interim analysis (and this is published; we'll add a link to it), randomization to the two higher doses is increased, and each of the other three doses is decreased. The placebo is held constant. The trial does not hit futility, and it can't hit success at this time, or flag to Eisai to go to phase three. So the trial continues on.
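An interim update of this kind, allocation shifted toward the arms most likely to be best while placebo is held constant, might be sketched as follows. The arm labels, the posterior draws, and the P(best)-weighted allocation rule are illustrative assumptions, not the trial's actual algorithm or numbers:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws of 12-month slowing vs placebo for the five
# active arms (BW = bimonthly, M = monthly); invented values for illustration.
arms = ["2.5 BW", "5 M", "5 BW", "10 M", "10 BW"]
post = {
    "2.5 BW": rng.normal(0.02, 0.05, 10_000),
    "5 M":    rng.normal(0.03, 0.05, 10_000),
    "5 BW":   rng.normal(0.08, 0.05, 10_000),
    "10 M":   rng.normal(0.18, 0.05, 10_000),
    "10 BW":  rng.normal(0.25, 0.05, 10_000),
}

# P(arm is the best active arm), estimated by Monte Carlo over joint draws.
draws = np.column_stack([post[a] for a in arms])
p_best = np.bincount(draws.argmax(axis=1), minlength=5) / draws.shape[0]

# Hold placebo at a fixed share; split the rest in proportion to P(best).
placebo_share = 1 / 6
active = (1 - placebo_share) * p_best / p_best.sum()
for arm, p in zip(arms, active):
    print(f"{arm}: {p:.2f}")
```

With draws like these, the better-looking high doses absorb most of the active allocation while the low doses shrink toward zero, which is qualitatively what the published interims show.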

One comment about flagging to Eisai: they were interested in whether there was a high posterior probability that the effect was at least a 25% slowing. That was the important trigger for them. If a 25% slowing was highly likely, they wanted to know that, to be told that, and they would accelerate to phase three. If there was a low probability of a 25% slowing, they would stop for futility.

Okay. That continues on every 50 patients. So at 250 and 300, similar analyses go on. Now, 300 is where an unplanned event happens in the trial.

Don Berry: So can I interrupt you and say a little bit more about

Scott Berry: Yep.

Don Berry: the 300. When we talk about these things, a lot of people say, oh, of course you need complete data. This is not complete data. This is all the patients in the trial, but nobody is at the 12-month point at the time of the first analysis. We're using the longitudinal model to predict what's going to happen in the future. And the most interesting thing that happened at this first analysis was that the algorithm was already learning that the two lowest doses were not effective.

And so it essentially zeroed them out. It wasn't quite zero the first time in the adaptive randomization, and it was built so that those doses could come back if the longer-term data pointed out that you were missing something and the low doses were actually doing well. That never happened.

Something to address, Scott, something that the journalists addressed, quoting people in the research community, was the many interims. It was just taken as a negative: you're doing all these things and you're going to be misled. There was lots of criticism about simply the multitude of analyses. Would you comment on that?

Scott Berry: Yeah, and we'll come back to this in part of the story. It's a general, and now pervasive, viewpoint that it's too complicated, this number of interim analyses. But every 50 patients an analysis was done, and a key part of the design is that we're only stopping at that point if we have strong evidence: a 95% chance that we have at least a 25% slowing, or less than a seven and a half percent chance that we have a 25% slowing. If we're not there, we're going to continue on in the trial, and then we're going to update the randomization as Don described. So this trial and its ability to update was critical, and the analysis every 50 patients enrolled was critical, as we'll describe as the trial goes on.
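That stop/continue logic (accelerate on a high posterior probability of at least a 25% slowing, stop for futility on a low one, otherwise keep enrolling) can be sketched as below. The posterior draws are simulated placeholders, not trial data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior draws of the proportional slowing for the best dose;
# in the real trial this posterior came from the Bayesian model at each interim.
slowing = rng.normal(0.25, 0.12, 100_000)

# Posterior probability of at least a 25% slowing.
p_ge_25 = (slowing >= 0.25).mean()

# Triggers as described on the podcast: >95% to accelerate, <7.5% to stop.
if p_ge_25 > 0.95:
    decision = "accelerate to phase three"
elif p_ge_25 < 0.075:
    decision = "stop for futility"
else:
    decision = "continue to next interim"
print(p_ge_25, decision)
```

A posterior centered right at 25%, like this one, lands in the wide middle zone, which is exactly why the trial kept returning "keep going" for so many interims.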

Don Berry: So, as Scott mentioned, the interim analyses are available. One of the places is a publication in JAMA Network Open on which I was first author and Scott was last author. And what you should do, dear viewers, is go to the supplemental material of that paper, where you can actually watch a movie of what happened in all of these analyses and how things changed. Naturally there's correlation from one to the next, but you're looking at the tendency of where it's going, what it's doing: it's assigning to the higher doses, and control along with them; control had the same probability as the highest-probability experimental drug. It's very instructive. These are obviously not independent; we're taking serial looks, and it's one of the few studies in which all of the interim analyses are published. Back to you, Scott.

Scott Berry: Yep. So the first interim analysis, as Don described, is triggered by patients enrolled, and I'll back up: there are two Bayesian models that are critical to this. One is the dose response: the 12-month change from baseline in ADCOMS is modeled with a two-dimensional normal dynamic linear model over the five active arms and placebo. The other one, which turns out to be even more critical, is a model between each visit and 12 months for the correlation between them. We built a linear model between each one of these, so that a patient's change from baseline at three, six, or nine months, wherever their last exposure was, is used to estimate what their value is going to be at 12 months, multiply imputed through the model to estimate the dose response and drive conclusions.
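A one-dimensional simplification of that normal dynamic linear model can be sketched as below: observed arm means are smoothed by a first-order random-walk prior across adjacent doses. The trial's actual model was two-dimensional (dose by frequency), and these means, standard errors, and the drift scale `tau` are invented for illustration:

```python
import numpy as np

# Hypothetical observed mean 12-month change by arm (placebo first, then five
# doses ordered by total exposure), with standard errors. Lower is better here.
ybar = np.array([0.20, 0.19, 0.18, 0.16, 0.12, 0.10])
se   = np.array([0.02, 0.04, 0.04, 0.03, 0.02, 0.03])
tau  = 0.03  # assumed drift scale of the random walk between adjacent arms

# First-order NDLM: theta[d] ~ N(theta[d-1], tau^2). The posterior mean solves
# (data precision + random-walk precision) @ theta = ybar / se^2.
k = len(ybar)
lap = np.diag(np.r_[1, 2 * np.ones(k - 2), 1]) \
    - np.diag(np.ones(k - 1), 1) - np.diag(np.ones(k - 1), -1)
precision = np.diag(1 / se**2) + lap / tau**2
theta = np.linalg.solve(precision, ybar / se**2)
print(np.round(theta, 3))  # smoothed dose-response curve
```

The smoothing borrows strength between neighboring doses, so a noisy arm mean gets pulled toward its neighbors instead of being taken at face value.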

So as you're flipping through this, as Don describes, it's a bit of a fun movie to watch. At 300 patients, the data is at its worst for the arms relative to placebo. The placebo is actually, at that point, estimated with high uncertainty to be the best arm, and the probability that the 10 bimonthly dose is a 25% improvement over placebo is 18%. Had that dropped below 5%, the trial would have stopped for futility, and that's the closest it ever got to futility. It moves on to 350.

So now, by the way, we're at September 26th, 2014; we've moved on about seven months. They've enrolled 350 patients, they're on their fourth interim analysis, and enrollment is temporarily paused in the trial.

There's a safety committee that's looking at the data, and they notice a radiological finding. We now understand this in the disease as ARIA, and it is a radiological finding that looks like microbleeds in the brain, specifically for the highest-magnitude dose, which is 10 bimonthly. They're noticing this for patients who are APOE4 positive, and the patients are asymptomatic. These are not symptomatic bleeds in the brain. They're asymptomatic, but there was concern about what this was, to the extent that they said Eisai could not enroll APOE4-positive patients on that high dose, one of the five arms.

So we've built this adaptive randomization for all of the different arms, and one of the stratification factors is APOE4 status. At that time, the decision is made by people blinded in the trial that we're just going to separate the randomization for APOE4 positives and APOE4 negatives. There will be different adaptive randomizations, and the one for APOE4 positives will zero out the 10 bimonthly. And the trial continues.

Don Berry: Moreover, the patients who had been assigned to that dose within the last period of time would be stopped. So it was like a bomb dropping on the trial, and Eisai was very concerned. They knew, of course, about this; they got the message from the regulators and they had to do it. And they talked to me about, you know, canceling the trial. It was such a huge negative, Scott.

Scott Berry: This is one of those cases where an unplanned adaptation could be made very quickly; by the way, Ella did a fabulous job. The trial is now randomizing patients with this separated randomization, and it continues on. At this point, at 350 patients, the randomization has absolutely zeroed out the three low doses, the 2.5 bimonthly, the 5 monthly, and the 5 bimonthly; it wants the two high doses. So APOE4 negatives are getting the 10 bimonthly at a high rate, and the positives are getting only the 10 monthly at that point. It's already focused, at this point in the trial, on the high doses. And the estimated efficacy, the probability of at least a 25% slowing, is about 55%.

Again, if that's 95%, they would have signaled to Eisai back in 2014 to run phase three, but it did not signal. It says: the data's good, but keep going. Its estimated efficacy at that point is approximately a 25% slowing. This continues on, and interestingly, at this point only about 15 patients per dose have passed 12 months, yet it's estimating that the high dose has about a 25% slowing through this longitudinal modeling. The early patients enrolled are just starting to move past 12 months.

Don Berry: And the missing data, and the model for missing data.

Scott Berry: Yeah. So on the 10 bimonthly dose, APOE4 positives who had less than six months of exposure were removed entirely. So there were different kinds of missing data. There were patients in the trial who went away. There were patients who were forced to go off drug. And there were patients who were enrolled so recently that we just didn't have their 12-month data. All three of these types were being imputed, based on their last cognitive status, in the multiple imputation model.
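Multiple imputation of the kind described, drawing several plausible 12-month values from the fitted visit-to-12-month relationship rather than filling in a single best guess, can be sketched like this; the slope, residual scale, and patient values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed visit-to-12-month relationship (as if fit from completers).
slope, intercept, resid_sd = 1.8, 0.0, 0.04

# Last observed change from baseline for patients missing the 12-month visit.
last_obs = np.array([0.08, 0.11, 0.15, 0.05])

# Draw m imputed 12-month values per patient, so imputation uncertainty
# propagates into the estimated arm mean instead of being ignored.
m = 50
imputed = intercept + slope * last_obs \
    + rng.normal(0, resid_sd, (m, last_obs.size))

est_per_imp = imputed.mean(axis=1)   # arm-mean estimate in each imputation
pooled = est_per_imp.mean()          # pooled point estimate
between = est_per_imp.var(ddof=1)    # between-imputation variability
print(round(pooled, 3), round(between, 5))
```

The between-imputation variance is what a single-fill-in approach would throw away; it is the piece that keeps the interim posterior honest about how much the missing visits could matter.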

At 500 patients enrolled, and this is June 19th, by the way: these interims took two or three days to run. The data was there; the algorithms were run, which largely took a day; they were reviewed in 12 hours to a day; they were signed off as good to go; and they were implemented immediately. The information was passed to a data safety monitoring board, but these were done essentially fully automated, with a review to make sure nothing weird was going on, and the trial continued. At this point it's estimating about a 25% slowing, which happens to be in this really interesting spot for Eisai: if there's high probability it's better than 25%, they want to run phase three; if there's low probability, they would want to stop for futility; and it's hovering right around that point.

Don Berry: And that's on the basis of the imputation of the missing data. Had you looked at just the raw data and ignored this, you would have had something like, I don't know, 10% or something, a lot less than 25%. And the DSMB: were they happy with the model, or did they react to the model? Because everything's driven by this model.

Scott Berry: Yes, they had no concerns with the model. We interacted with them infrequently: the interims took place, we said that statistically they were good, and they signed off and the trial continued. It was just that one aspect of the APOE4 positives, and that continued throughout the trial.

This continues on at every one of these interim analyses, and so the trial gets to its full enrollment. It actually over-enrolled: 856 patients ended up in the trial, with full enrollment in September of 2016. With all patients enrolled, the only things the trial can do are stop for futility (and it never got close to futility again after that third interim) or signal to Eisai.

Now, Eisai built in a really cool part of this trial: all of this is being triggered on 12-month change, but all patients are being followed through 18 months, and the trial is going to run and get full exposure through 18 months. The 12-month analysis was billed publicly as the point where we'd find out if this is going to go to phase three. And at this point they lowered the threshold: if there's an 80% probability of a 25% slowing, they wanted to go to phase three. And again, they could stop for futility.

They found out at this point that it's not 80%. At the time this interim analysis happens, when all patients are enrolled, the probability of at least a 25% slowing was 65%. It needed to be 80, and it was 65, so it didn't clear that bar. So Eisai publicly disclosed that we're continuing exposure in the trial; we don't know the answer yet.

And interestingly, I'll go to a snapshot of an article written in the Alzheimer's forum, Alzforum, about this result. They interviewed you, Don, and asked you about it, and they also interviewed a neurologist, a well-known Alzheimer's trialist, Paul Aisen. So they asked you about this result, that at 12 months we don't know the answer yet, and Don is blinded.

And he says: the BAN2401 trial results so far, to the extent that we know them (and all he knew was that the trial was continuing: it wasn't 80%, and it wasn't less than seven and a half), indicate the therapy is neither extremely effective nor clearly ineffective. On balance, and in the context of the sorry state of drug development in Alzheimer's disease, that's good news. That was Don's quote. And we are now six months from full readout of the trial, and all they know is that it's not 80 and it's not less than seven and a half.

Paul Aisen takes an opposite view, and I'll read you his quote; I think it's kind of interesting. So, what do these results mean for the field? Not everyone is convinced that adaptive trials are the wave of the future. Paul Aisen of the University of Southern California, in San Diego, commented that while adaptive designs have proven useful in the oncology field, where endpoints are more objective, they may be problematic in AD trials, Alzheimer's disease trials. And now the quote: "The greatest concern about Bayesian adaptive approaches in AD trials relates to our cognitive clinical outcome measures. These measures are highly variable and noisy. Adapting on interim incomplete data risks chasing noise and making erroneous decisions regarding efficacy or futility."

Don Berry: I'm just compelled to say: I know that it has noise. I know this variability. I recognize it. I modeled it. I live with it. And I can tell, and build into the algorithms and the design, what's real and what's noise.

Scott Berry: And what was so misunderstood by Paul is this: what did the adaptive design say? It said, we don't know the answer yet. Let's keep going. We need more data. His criticism was that such designs are too much at risk of making an erroneous decision, claiming success or futility, when in fact the design knew the variability and said keep going, go to 18 months. So that doubt was anticipated and built into the design.

Okay. Flash forward now six months, and the trial finally reads out at 18 months. By the way, what was the role of the adaptive randomization? Out of the 856 patients enrolled in the trial, 238 go to placebo. The lowest dose gets 52 patients. The next gets 48. The other 5 milligram arm gets 89. Those are the three doses that very quickly were moved away from. The two high doses get 246 and 152, and the only reason it was 152 is because that arm couldn't enroll APOE4 positives. Out of the 587 patients put on BAN2401, two thirds of them, 67%, 398 patients, are on the two high doses.
So the response adaptive randomization,
when Don goes back to the Alzheimer's

and says, you need better dose
finding trials with better exposure.

To understand efficacy, you
need to explore dose it.

It did that and through response.

Adaptive randomization very
quickly focuses on the high dose.

Don Berry: And so you get greater precision associated with the high doses, exactly the doses you're thinking about moving forward. So it's a huge benefit. But I want to point out that this is also an attraction for patients. They can be told, in the design, that if there is a dose that is doing well, we're going to keep looking at that, and so, if that has happened in the trial, you're more likely to get a dose that's doing well. And not only in this trial but in other trials where we use adaptive randomization, we find that that's exactly true. The question of whether patients in clinical trials are treated better than patients not in clinical trials has been batted around for decades, in JAMA, et cetera; some say yes, some say no, and there's data on both sides. But it's clearly the case that if you're in an adaptively randomized trial and there is something that's positive, you're more likely to get it. So it should be an attraction not only to physicians but also to their patients.

Scott Berry: So the trial reads out at 18 months, with the two high doses that I described carrying the larger sample sizes: the high dose given monthly and the high dose given bimonthly. The 10 monthly is estimated to be an 18% slowing, and its probability of being better than placebo is 93%, but it isn't estimated to be 25. The high dose given twice monthly is estimated to be a 27% slowing; think about where that is relative to their targeted 25; and there's a 76% probability that it's at least a 25% slowing.

So Eisai at this point does a couple of things. It starts a phase three trial, and it selects the 10 bimonthly. At this point ARIA is much better understood, and there's actually some sense that it tracks the efficacy of the clearing of amyloid. In the background, the mechanism of this treatment is the removal of the pathological amyloid, and they're doing PET scanning: the high dose is absolutely removing amyloid, and ARIA is thought perhaps to be an effect of this, almost, in some sense, a demonstration of activity. It's still a concern, and of course symptomatic bleeding, or a higher level of bleeding, would be a bad thing.

So they run this phase three trial, but they also submit the phase two data to the agency and present it to them. They plan a trial of the 10 bimonthly, enrolling both APOE4 positives and negatives, and a 1,795-patient trial is enrolled. Meanwhile, the FDA reviews the data and grants accelerated approval to BAN2401, which has now changed its name to lecanemab.

They grant accelerated approval based on the reduction in amyloid, and there's some stunningly cool data here: if you look at the reduction in amyloid for the lower doses relative to the higher doses, and you look at the change in the cognitive values for those doses, it's incredibly predictive. Across the multiple doses, the effect on amyloid is highly predictive of the effect on the clinical outcome, almost demonstrating (and I know we shouldn't say the word) a surrogacy. I think that's what enabled them to trigger accelerated approval.

Now the phase three trial, 1,795 patients, reads out, and the press release comes out in 2022. It demonstrates a highly statistically significant benefit on CDR sum of boxes for that dose at 18 months, and the estimated slowing of the rate of decline is 27%. It was nailed. Normally, with variability, we wouldn't have expected it exactly, but it was exactly the estimate from the phase two trial: the Bayesian model doing the longitudinal modeling and the dose response estimated a 27% slowing at 18 months, and the phase three hit it right on the head. It ended up highly statistically significant, and FDA gave lecanemab full approval for the treatment of Alzheimer's disease.

Don Berry: I go back to that meeting we had with the 14 failures in phase three. It's hard not to wonder whether, if they had done something like this in those 14 programs, you know, seen something that was positive and increased the sample size accordingly, we wouldn't have better treatments today for Alzheimer's disease.

Scott Berry: Yeah. It's hard to go back through this trial, but I'll describe a couple of things that happened that I think were powerful. If you go back to the algorithms, and you can look at the posterior estimates, when there are 500 patients enrolled in this trial, back in 2015, the algorithm is estimating this 25% slowing, and it holds pretty constant all the way through to the end; it's giving that forecast. Now, Eisai did a great thing by getting this exposure and making this decision, but the model had pretty much forecasted the answer at that point. But I also alluded to the point at 300 patients enrolled, when the futility analysis took place and the data were not promising. Had they run a 300-patient trial, a small phase two, they might never have continued development of this drug. So the alternative designs that could have been run may have led to this drug never being run in phase three. It's hard to know, and it's hard to know the decisions that would have been made in that scenario.

Don Berry: And also, had they used something other than a reasonable and somewhat sophisticated missing-data imputation process: instead of the 27% they observed in the phase two study, had they looked at just completers, it would have been 12%. This is partly explainable by the fact that, if you look at what the missing data were, the distribution for patients who had been assigned to the high dose was very different from the missing-data distribution for the other doses and, for example, for control. The model recognized that. Of course, the model didn't know that regulatory folks had mandated dropping that dose from the future part of the trial; it only knew what the data were, and it was basing everything on the longitudinal modeling. So there were a number of things; it's sort of a perfect storm, or not a storm at all but perfect weather: everything was moving in the right direction, and it gave rise to a positive conclusion.

What's the latest?

I mean, lecanemab has been fully approved.

Uh, now another, uh, phase, uh, phase
three trial has been, uh, observed

with, uh, similar mechanism of action.

And can you say something about
the current status, Scott?

Scott Berry: Yeah, so donanemab from
Eli Lilly has been approved, full
approval, uh, and it demonstrates
slightly better, uh, slowing of
cognitive decline, I think, at 18 months.

But now I think there are two
treatments that are disease

modifying that have been approved.

And interestingly, now, where Alzheimer's
was a bit of a, a dead end for
pharmaceutical development, there's a huge
investment, in part because clearly, uh,
affecting amyloid has an effect, and
it has an effect on slowing disease.

Now, going into tau, going after
better effects than this, there's much
more in the, in the way of development.

I, I, I wanna point out
something interesting.

Don Berry: We always
wanted to change the world.

Scott Berry: Yeah.

And so the interesting thing is, while
that trial was run, there were more
companies than Eisai that were interested
in Don's messaging of better phase
two trials, and Abbott was one of them.

And Rob Lenz was the clinical, um, the person in
charge of neurology at Abbott at the time.

And they ran two trials, one of
them ABT-126, which is published.

The results of that stopped for futility.

And I think largely that the conclusion
was that the drug didn't work.

So a very similar design, run in that
scenario with a different drug, was
pretty strong evidence that the drug doesn't work.

And I, I think that's
the right conclusion.

So the design doesn't make the drug.

But hopefully the design gets the right
answer for the drug, and I think these are
two examples where putting a different
drug profile into a very similar design
gave very different answers
at the end of the day.

Don Berry: There was an interesting
discussion with Rob and the, uh, clinical
trial committee at, uh, at Abbott.

They asked him, uh, at the beginning.

So this is a complicated trial.

I mean, it had many, uh, aspects very
much in common with, uh, the BAN2401 trial.

Um, how much time did it take to do this?

And Rob said,

you're right.

It took a while.

It took us like six months working
with the Berrys to get what we wanted.

Um, but on the other hand, the trial
ended two years sooner than it would

have otherwise because we were doing
the right thing in investigating.

And, and, uh, understanding that we
really did hit futility with that drug.

So it's good in both directions.

Scott Berry: Yeah.

Yeah.

I, I guess a last thing it would
be good to come back to is you made

this comment, so in the trial there
were 17 analyses of this data.

The data's accruing, it changed the
randomization, but this notion, uh,
what's beautiful in the Bayesian
perspective, is that the analysis at
the end of the trial is unaffected by the
fact that all these analyses take place
within it, and it makes so much sense.

And, and there's this view that
somehow these 17 analyses infect the
data, that it no longer tells us something
with the same level of strength.

And in viewing this, and seeing the results
go through, and looking at this:
we should be doing this in
most of our phase two trials.

It's good for patients, it's
good for drug development.

It is not a bad thing that
these analyses take place, that
a snapshot of the data is taken.

It doesn't change the data, it
doesn't change the scientific

conclusion, and I think this is a
beautiful example of exactly that.
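Scott's point, that in the Bayesian framework the final posterior does not depend on how many interim looks were taken, follows from the likelihood principle. A toy Beta-Binomial sketch (made-up numbers, not the trial's actual model) shows it directly:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.random(850) < 0.30        # 850 patients, true 30% response rate (toy)

# Path A: update a Beta(1, 1) prior at 17 interim looks of 50 patients each
a, b = 1.0, 1.0
for look in np.array_split(data, 17):
    a += look.sum()                  # responders seen at this interim
    b += (~look).sum()               # non-responders seen at this interim

# Path B: a single analysis of the same final data set, no interims at all
a_once = 1.0 + data.sum()
b_once = 1.0 + (~data).sum()

# The posteriors are identical: interim looks do not change the final inference
print((a, b) == (a_once, b_once))    # prints True
```

Frequentist operating characteristics of the design as a whole do depend on the interim decision rules, which is why such designs are evaluated by simulation, but the posterior computed from the final data is the same either way.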

Don Berry: And in a way it's, uh, it's AI.

You know, everybody's talking
about AI, but this is a
prospectively designed trial.

It's run as an algorithm; the
algorithm is like an automaton.

Um, and so we can evaluate, as we did, uh,
the type one error associated with it,
the power of, uh, the study, um, and, uh,
again, a salute to Tom Parke, um, working
on the logistics associated with it.

Uh, you know, hearkening back, Tom and I
first worked together on a Pfizer stroke
trial where we had daily updates, and
the algorithm would, uh, reassess the
randomization probabilities every day.

Uh, and this kind of thing.

There is a logistical aspect of this.

Um, but it's possible to do,
and it's possible to have

something that's rigorous.
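A daily update loop like the one Don describes can be sketched with Thompson-sampling-style response-adaptive randomization. This is a minimal illustration with hypothetical response rates, not the actual BAN2401 or Pfizer algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
true_p = np.array([0.25, 0.35, 0.45])   # hypothetical response rates for 3 arms
succ = np.ones(3)                        # Beta(1, 1) prior successes per arm
fail = np.ones(3)                        # Beta(1, 1) prior failures per arm

for day in range(90):
    # Thompson-style weights: estimate P(arm is best) by posterior sampling
    draws = rng.beta(succ, fail, size=(2000, 3))
    w = np.bincount(draws.argmax(axis=1), minlength=3) / 2000.0

    # enroll today's patients with the freshly updated probabilities
    arms = rng.choice(3, size=5, p=w)
    outcomes = rng.random(5) < true_p[arms]
    for arm_i, y in zip(arms, outcomes):
        succ[arm_i] += y
        fail[arm_i] += 1 - y

print("final randomization weights:", np.round(w, 2))
```

Because the whole design is an explicit algorithm, its operating characteristics, type one error under null response rates and power under assumed effects, can be estimated by rerunning a simulation like this many thousands of times, which is exactly the kind of evaluation Don mentions.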

And, uh, and the FDA has issued
guidance to, um, address the
question of, uh, complex adaptive designs.

Uh, so the FDA is wholly involved here,
not only the neurological part or the
cancer part, but other parts as well.

Scott Berry: Yep.

All right.

So, uh, again, uh, JAMA
Network Open, uh, Berry et al.,
"Lecanemab for Patients With Early
Alzheimer Disease: Bayesian Analysis of
a Phase 2b Dose-Finding Randomized
Clinical Trial," has a really unique look at,
uh, a trial with a very
cool story and a very cool impact.

And so here on In the Interim,
we've taken you through 17 interims and
a look back at a, at a pretty cool trial.

So Don, thanks for joining me.

In the interim, I know you could not
join me in the interim in that trial.

Uh, but thanks for joining me here.

Till next time.


Creators and Guests

Scott Berry
Host
Scott Berry
President and a Senior Statistical Scientist at Berry Consultants, LLC
Don Berry
Guest
Don Berry
Berry Consultants Founder & Senior Statistical Scientist
