Seamless 2/3 Trial Designs · Episode 17 · 45:48

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: Ah, welcome everybody to In the Interim, a podcast on all things science of clinical trials, statistics of clinical trials.

We pose this question of "in the interim" in adaptive trials, and so we've got some talk of other adaptive trials today.

So I have some really cool guests today.

We're gonna talk about a
particular paper that came out.

We'll introduce the paper, and I have my co-authors here. So Dr. Kert Viele,

Who, uh, sometimes hosts this show
and has done a number of the podcasts.

And Dr.

Joe Marion and Dr.

Lindsay Berry have joined me today.

So welcome everybody.

Joe Marion: Hey Scott.

Lindsay Berry: Thanks.

Scott Berry: So what is the paper?

The paper has been published. The title is Optimal Sample Size Division in Two-Stage Seamless Designs.

So as Joe just reminded me, the math isn't necessarily new here, but it was, to me, a bit inaccessible.

So what is the problem, and what have we done historically?

Lewis Sheiner referred to this as: there are learning trials and there are confirming trials.

You run a phase two trial. You look at the data, you make decisions, you run a confirmatory two-arm phase three trial, and these don't mix well.

The world is changing a little bit.

And, and as, uh, Don Berry always
says to us, you're always learning

and you're always confirming,
you're always doing both.

So we're gonna meld these together into
a seamless phase two three trial design.

So a very simple example of this might be three doses and a placebo.

In stage one of the trial (we're gonna confuse "phase two" and "stage one" at times), maybe 50 patients on each arm, fixed randomization, pick a dose.

Seamlessly, the next day, you're enrolling to that dose and placebo, maybe a hundred patients on each of those, the selected dose and the placebo.

"Seamless" refers to this moving from stage one to stage two of the trial.

That decision is made by algorithm or by a DSMB, but made within a couple of days, and now you're just enrolling stage two.

And these will be inferentially seamless, in that the analysis at the end of stage two is going to include the patients from the first stage of the trial and the second stage of the trial.

So these trials, you'll read about them. You'll see them around.

What was inaccessible to me is that we're used to group sequential type trials where you spend alpha.

So let me stop; I'm speaking way too much, and I said I was just gonna be the host.

So, Kert, why do we have to adjust alpha at the end when we're combining stage one and stage two together?

Kert Viele: Uh, so I mean, if you are looking at multiple doses, you have multiple chances for each of those doses to be picked and go on into the next phase.

When you're actually using that data, you expect that whatever dose you pick, it might have a slight upward bias, because it was picked among multiple options.

And so whenever you see that kind of cherry picking, you expect that there needs to be some kind of alpha adjustment that goes with it.

And so what we tried to do here was to have nice, simple formulas for: here is the adjustment.

Here's something where, you know, you take two seconds on your computer and boom, this is the threshold that you need, and you'll control alpha, you'll control type one error.

Scott Berry: Okay.

When I've seen these historically, we've designed these trials. There's a procedure out there for controlling type one error: the closed testing procedure, Bretz et al.

I find it awkward, somewhat inaccessible, that you combine the p-value from both parts of this, and we're gonna come back to this, but it doesn't do what we do in a group sequential trial, where we understand you need to control type one error and we could adjust alpha at the end.

Can we figure out what alpha we need at the end, one that's lower than 0.025? (We're gonna talk about one-sided type one error throughout.)

The closed testing procedure doesn't give you that. So can you do this?

If so, this then becomes very accessible, almost like group sequential.

So, Joe: I know this math is out there. How do you solve this problem?

How do you calculate an alpha at the end of stage two, given the trial design?

Joe Marion: I mean, maybe we should tell the story a little bit of how this came to be, right?

Like, you came to us, as a company, with this question, right?

Uh, we have a weekly meeting where we talk about statistical topics, cool things we're doing in designs, hard problems we don't know how to solve.

And you brought this topic; you said: I want to figure out how to do this adjustment.

And this is something that Lindsay had worked on a long time ago, right?

Lindsay, when did you first start working on this?

Lindsay Berry: Yeah, that's right.

I worked on this a little bit in my undergrad at, uh, UT Austin.

I was working on my thesis with Peter Mueller at UT, and we addressed this; the approach we took there was using simulations to find the adjusted alpha threshold that you would use at the end of a phase three study.

Um, which is one approach to doing this.

And, um, you know, at the time I wasn't aware that the math already existed.

Joe Marion: That speaks to the point, right? We just didn't know this was out there, right?
Yeah.

Mm-hmm.

Scott Berry: So what is the math?

Simulation is straightforward, but, um, how do you do the math? Can you set this up?

Joe Marion: Yeah, so this comes from, uh, group sequential theory, and I think Kert and I both, after you gave that talk, went away and tinkered individually, and within a week we both had a solution to this, right?

And the idea is that at the interim analysis, your data can be represented by a Z statistic that is approximately normal. You have a Z statistic in each of your treatment arms, and because you know those are approximately normal, you can get a closed-form expression for the CDF of the maximum.

So the math assumes you choose the best dose, the one with the largest Z value, and then you know that the data you collect in the second half also follow a Z statistic.

And so you can use quadrature to figure out the exact correction you need, accounting for the correlation between the data at the interim analysis and the full data set that you have at the end.

So that's, that's kind of the math.

It's nice, you know, normal theory kinds of stuff.

I think it's three or four lines' worth of math.

Uh, it's very elegant.

It produces a very nice solution.
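For listeners who want to tinker, here is a minimal sketch of the kind of calculation Joe describes. This is our illustration, not the paper's R code: it assumes a normal endpoint with known variance, k doses sharing one control arm, pick-the-max selection at information fraction t, and simple trapezoid quadrature over the distribution of the maximum interim Z.

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def type_one_error(c, k, t, grid=2000):
    """P(final Z > c) under the global null when the best of k doses
    (sharing one control arm) is selected at information fraction t.
    Given the interim maximum u, the final Z is N(u*sqrt(t/2), 1 - t/2)
    under this simplified model."""
    h = 16.0 / grid
    total = 0.0
    for i in range(grid + 1):
        u = -8.0 + i * h
        w = 1.0 if 0 < i < grid else 0.5          # trapezoid weights
        dens = k * Phi(u) ** (k - 1) * phi(u)     # density of max of k N(0,1)
        tail = 1.0 - Phi((c - u * math.sqrt(t / 2.0)) / math.sqrt(1.0 - t / 2.0))
        total += w * dens * tail
    return total * h

def adjusted_alpha(k, t, target=0.025):
    """Bisect for the final threshold c with type_one_error(c) = target,
    then report the one-sided alpha 1 - Phi(c) used at the final analysis."""
    lo, hi = 1.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if type_one_error(mid, k, t) > target:
            lo = mid
        else:
            hi = mid
    return 1.0 - Phi(0.5 * (lo + hi))
```

Under these assumptions, `adjusted_alpha(k, 0)` recovers the full 0.025, `adjusted_alpha(k, 1)` approaches a Bonferroni/Dunnett-style correction, and intermediate fractions land in between.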

Kert Viele: I think one of the things that's really nice about the intuition here is it depends on where the interim is.

So if the interim is very, very early, so you do a really tiny phase two and a big phase three, you're not reusing a lot of data.

You don't have to take much of a penalty, because there's virtually no reuse.

On the other hand, if the interim is very, very late, you basically spend all of your time in phase two.

It's not quite a Bonferroni, but it gets pretty close to it.

So those are the extremes; the real question is, well, where am I gonna put the interim?

And I think this gets into the part where the paper starts to break a little new ground: knowing what to do with a fixed interim.

Where do you put the interim
to get the optimal design?

Scott Berry: Yep.

Yep.

So you brought that up, the two extremes.

Suppose we ran a seamless two three trial; I gave an example of three doses, uh, 50 patients on each.

If you ran all three doses to the end of stage two, you go to the end and now you can pick any one of the doses to be statistically significant.

You get an alpha that comes out of this, and it's essentially Bonferroni: you get 0.00833, so, you know, 0.025 divided by three, 'cause you carry them all the way to the end.

If you picked a dose without enrolling any patients in stage one, and you just run a two-arm trial, you get 0.025.

So this calculates, as a function of the stage one fraction, what your alpha spend is.
And we have R code for this.

By the way, did we provide the R code, so people can access it as part of the paper?

Yep.

Joe Marion: It's just plain text.

It's not a package.

Scott Berry: Um, so by the way, the paper is in Pharmaceutical Statistics; it came out as Berry et al., Optimal Sample Size Division in Two-Stage Seamless Designs, if you want to grab that code.

So, Lindsay, I know you have the R code right there in front of you.

So if I'm gonna run a trial that is three doses: 50, 50, 50, 50 in stage one, and then a hundred on the selected dose and a hundred on placebo.

So at that point you've picked a dose and now you're enrolling those two arms.

What is my alpha at the end of that trial, if I include the 50 on the selected dose and the 50 on placebo from stage one with the hundreds, so I have 150 on each arm?

What's the alpha that I use at the end?

Lindsay Berry: Yeah, so in that example, your information fraction is a third.

So a third of the patients in your analysis are coming from phase two.

And in that example, your adjusted alpha would be 0.013, essentially, so slightly higher than the 0.0083 from a Bonferroni adjustment.
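That number can be sanity-checked with a simulation along the lines Lindsay described from her thesis work, run under the global null. Again, this is our sketch, not the paper's code: it assumes normal outcomes with known standard deviation and the 50-then-100-per-arm layout from Scott's example, selecting the dose with the largest interim mean.

```python
import math
import random

def simulate_trial(rng, k=3, n1=50, n2=100):
    """One seamless 2/3 trial under the global null (all true effects 0, sd 1).
    Returns the final pooled Z for the selected dose versus placebo."""
    # Stage 1: arm means for k doses and placebo, each over n1 patients.
    dose_means = [rng.gauss(0.0, 1.0 / math.sqrt(n1)) for _ in range(k)]
    ctrl_mean = rng.gauss(0.0, 1.0 / math.sqrt(n1))
    best = max(range(k), key=lambda i: dose_means[i])   # pick-the-max selection
    # Stage 2: n2 more patients on the selected dose and on placebo.
    n = n1 + n2
    pooled_dose = (n1 * dose_means[best] + n2 * rng.gauss(0.0, 1.0 / math.sqrt(n2))) / n
    pooled_ctrl = (n1 * ctrl_mean + n2 * rng.gauss(0.0, 1.0 / math.sqrt(n2))) / n
    return (pooled_dose - pooled_ctrl) * math.sqrt(n / 2.0)

rng = random.Random(17)
reps = 200_000
finals = [simulate_trial(rng) for _ in range(reps)]
naive = sum(z > 1.960 for z in finals) / reps      # unadjusted 0.025 threshold
adjusted = sum(z > 2.226 for z in finals) / reps   # threshold with alpha ~ 0.013
```

Under these assumptions, the unadjusted 1.96 threshold inflates the one-sided type one error well above 0.025, while the stricter threshold corresponding to alpha of roughly 0.013 brings it back to about 0.025.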

Scott Berry: Okay.

And so you can calculate that now: you can run the trial, select your dose, run stage two to the end, and you can test with this adjusted alpha at the end of it.

Okay.

Now, uh, I know we have a picture of the alpha spend as a function of the information fraction, and I was fascinated by what that would look like.

Is it linear? Is it concave up?

Do you spend a lot of the alpha initially? Does it take till the end?

Who wants to describe these curves for people who cannot see them?

Kert Viele: I nominate Lindsay.

Joe Marion: I feel like we should just share it, right?

Scott Berry: Well, most people consume the podcast audio-only; you know, they're driving to work.

Only about 10% of people actually see us.
Um,

Joe Marion: I feel like all the best podcasts are about describing visual things anyway. So this is very on point.

Kert Viele: Yeah.

Don't take your eyes off the road.

You know I crashed because I
was looking at an alpha chart.

Not a good excuse.

Scott Berry: Okay.

Let me ask this question, Lindsay, to describe this. If I have three doses...

uh, well, actually, our picture doesn't...

Lindsay Berry: ...doesn't have a three on there.

Scott Berry: So let's pick two doses.

Very, very simple.

I have two doses in stage one, and then I pick one and I go to stage two.

Um, in that case it's 0.025 if I just ran a two-arm trial the whole way; carrying both doses to the end of the trial, it's 0.0125.

Where is it halfway between those? At what information fraction? Can you pick that off the figure, somewhere within that?

Joe Marion: Yeah.

Lindsay Berry: It's about an information fraction of 0.2. So if you spend...

Kert Viele: I thought this was rather fascinating, 'cause the first time you see this graph, your thought is: oh, I'm spending a whole bunch of alpha with even a small exploration in phase two.

And I think if you just stare at that, your first thought is: oh no, I don't want to spend that much alpha; maybe I should get out of phase two really, really quickly. Which isn't the right answer.

So

Scott Berry: No, no, you're going ahead in the paper, so...

Kert Viele: I know, but okay.

Scott Berry: ...that's the best part of it.

Lindsay Berry: Yeah, I mean, I think what Scott's referencing here is, when you look at the figure, you see these curves representing, you know, for this number of experimental arms, what is your alpha adjustment.

And going from spending zero time in phase two to 10% of your patients in phase two, you have a huge drop.

So the "penalty," I'll use Scott's favorite term, is incurred with very little exploration in phase two.

And then with additional exploration, the added penalty you take is not very much larger.

Scott Berry: Oh, okay.

So you said the word, and this gets to the question: is it a penalty? Is this a good thing to do or not?

You have to adjust the alpha at the end of stage two, but you get the benefit of including the data from stage one.

And on a recent podcast, I lamented the use of the term penalty; this is an adjustment of alpha.

In many circumstances, you improve power. So it's a bad term; better to talk about distributing or allocating your alpha.

Is this a good thing to do, Lindsay, to adjust your alpha and include stage one with stage two?

Lindsay Berry: Yes, it is a good thing to do, in terms of having higher power than the same design where you only analyze stage two data but get the full 0.025.

Scott Berry: Okay, so we investigated this question, and we looked at multiple potential dose response curves: different numbers of doses, different dose response shapes.

Fair to say that in every single scenario, it is more powerful to include stage one data and adjust your alpha. You have higher power in that scenario.
Okay, so here we are through the paper at this point, where you can calculate, with some very simple R code that does a quadrature integration, the alpha that you have at the end, and you can write it in the protocol.

At that point, it improves power. It's a good thing to do.

So before I pose my next question, let me pause and make sure everybody keeps me honest on where we are.

Joe Marion: I'll add a little
caveat, which is that all the

calculations in the paper, sort of
assume the endpoint is very quick.

Scott Berry: Yep.

Joe Marion: All right.

That the time to the endpoint is fast relative to the accrual rate.

And I think where this can break down a little bit, in terms of being always useful, is if the endpoint is actually quite long relative to the recruitment rate.

Um, you end up... you know, I think we don't really like to take breaks between stage one and stage two.

Right.

Is that, is that fair Scott?

Scott Berry: that's

very fair.

Operationally, it can be hugely
problematic if you pause enrollment

at any point in the trial.

Joe Marion: Yeah.

And so, um, you know, I think you need a slightly different set of calculations if you've got one of these long-term endpoints.

The math we did still works. All of that is still fine. It all still plays out, but the relative benefit of this thing kind of changes.

Yeah.

Scott Berry: Yeah, so we've made reference to the information fraction: the proportion of patients that are in stage one as opposed to the total size of the trial.

If you only have your primary endpoint on a subset of stage one patients, there can be a different information fraction calculated within that setting.

And some patients in stage one might not actually contribute to that until later in the trial.

So this is something we do quite a bit.

We might employ simulation, or we might just be able to calculate the information fraction, and then evaluate whether this is a positive thing to do and what the difference in the sample size is.

There's complexity to many of these trials.

I think that's very fair.

Joe Marion: Yeah.

I think the other way to think about it is: you know, you're over-enrolled in your stage one, in some sense, right?

You have 25 patients who've completed the endpoint, but maybe you have another 25 outstanding. They're not contributing any data to the analysis.

You don't pay any penalty for not having their data, but you've still enrolled them. Right?

And so in some sense, making stage one bigger is sort of extra expensive in that setting.

Yeah.

Scott Berry: Yeah, I recently did a project where a client had two doses and a control.

In stage one, they were picking one of the two doses and moving to equal randomization, one-to-one, in stage two, and they had this aspect that the endpoint was 12 weeks.

And so as they enrolled stage one, by the time they'd have data at 12 weeks, there was gonna be over-enrollment.

With two doses, suppose you over-enrolled 10 patients on each of the arms: 20 of those patients count in phase three.

And so in some sense, 10 patients are, quote unquote, wasted within that.

And of course it depends on the length of enrollment and the time to the endpoint, which is part of evaluating each one of these trial designs.

Now, the other interesting part of it was they actually had earlier information than 12 weeks, which was very highly correlated with 12 weeks, and we built that in with a longitudinal model to make that decision.

Joe Marion: Yep.

Scott Berry: Okay, so now getting to where Kert wanted to go.

So: seamless two three trial. You can calculate the alpha very straightforwardly. It's a good thing to do. You want to include stage one data as part of stage two.

What does that mean about the size of the phase two part, stage one of the trial, relative to doing separate trials?

Kert?

Kert Viele: Oh, I get called on.

Um, so one of the things that was interesting: we explored a lot of different dose responses.

Some dose responses where a couple doses are good and the rest of the doses are bad, dose responses that had a linear increase, um, different kinds of shapes.

And one thing that was really interesting for the separate trials: depending on how the doses actually stack up, you wanna do very different splits in how much of your total development is in phase two versus phase three.

You're really at risk of doing too few or too many if you don't get that dose response right, if there's a different number of doses that are good or bad.

On the other hand, if you do the seamless version, you've got the option where you get to keep that phase two data.

You do lose something by going farther in phase two; you don't focus as much. But it's a lot less than with separate trials.

So it became a very robust decision: somewhere around 30 to 40% of your total development ought to be spent in phase two in that setting, and it was very robust across dose responses.

It really helped in sizing the trials without really having to think about it too much.

Scott Berry: Oh, and in every single scenario (the size of phase two, the number of doses, the dose response, the truth of the dose response), the phase two part is bigger if you do seamless trials.

Lindsay Berry: Yeah, the way we approached this was: we identified, for the seamless development program and the separate development program, what the optimal way is to allocate your total number of patients between stages in terms of maximizing power.

So, um, you're right, yeah. In the seamless designs, the way to allocate your patients was to put more in phase two to optimize power, compared to the separate designs.

Scott Berry: So another part of that is that by having a larger phase two, you're more likely to pick the better arm in that setting.

Joe Marion: And you get to keep
the data from that arm, right?

So if you're worried about penalties, you're actually starting with your best data set, rather than having to start all the way over again.

Scott Berry: So, um: easy to calculate alpha. Phase two is bigger.

We're more likely to get the better dose, which by the way doesn't just affect stage two and power, but the eventual approval of the drug, the dose used.

This is a thing that I think everybody would want: a bigger phase two.

When you run separate trials, it's harder, because you gotta get out and you gotta get to phase three, 'cause patent life is going.

But here, those patients are included in phase three, and by the way, you're losing the white space between trials. Huge advantages to this.

You're also getting a better dose, which is good.

There's a lot of win, win, win in the seamless two three trial design scenario.

Okay.

Now, returning to an initial question: all I said was, you know, this closed testing procedure is awkward.

So we compared the group sequential alpha calculation to the closed testing procedure.

And what did we find, Joe?

Joe Marion: I actually... Lindsay?

Scott Berry: I, yeah.

Lindsay Berry: Yeah, we did. So we added...

Kert Viele: When we say "we," Lindsay did.

Scott Berry: Yeah.

Joe Marion: I think we looked up some of the math, but yeah. Lindsay, yeah.

Lindsay Berry: Yeah, so we did; we implemented the closed testing procedure in one specific way.

And I wanna say that there are probably many ways you could, you know, figure out the details of how to implement this. So this is just what we found in our implementation.

The results overall were similar, but in some scenarios, the closed testing procedure had lower power than the seamless design with this group sequential math that Joe explained at the beginning.

And it was particularly evident in scenarios where one or very few doses had an effect; you'd see the power of the closed testing procedure drop.

And I think that's just a reflection of how the closed testing procedure works, where you have to test other hypotheses first before you can reject the null hypothesis on the selected arm.

So that sort of blocks the ability to reject the null hypothesis and lowers your power in the scenario where only one dose actually works.

Scott Berry: So the closed testing procedure was a bit conservative in type one error in some of these scenarios; it had lower power there.

So our approach also does very well in terms of power, better than the general closed testing procedure in some of these aspects.

Joe Marion: There's a weird thing about the closed testing procedure, too, where, I mean, what is your estimate, right?

Like, you know, I guess it's probably the mean from the pooled stages, but that's not actually where you got your p-value from.

Um, I assume you're unlikely to have a weird case where your pooled analysis doesn't, you know, meet significance.

But it's just kind of weird to me to have your test and your model kind of divorced from each other in this way.

Yeah.

Scott Berry: So that brings up: in a number of designs we do, for people working on phase two three seamless trials, a good number of them fit into this bucket.

Here we made it very simple: fixed sample size in stage one, fixed sample size in stage two, select a dose, and move forward. But a number of these designs end up with innovations to them.

Adaptive things happening in the stage one part: you could potentially drop some doses sooner. You could potentially do RAR, response-adaptive randomization, on those doses.

You could have adaptive sample size in the... that was all the stage one part. I knew I would mess up phase two and stage one.

Um, so stage one can have various adaptations.

Stage two can have adaptations, mostly probably in sample size. Maybe you could stop enrolling sooner if the effect is larger: flexible sample size within those parts.

So that's where this gets kind of neat, to use this and add adaptations to both parts of it within seamless trials.

Kert Viele: So, you know, there's a couple parts to that.

One question is: well, if you're gonna be complicated anyway, why worry about this result? You're gonna have to simulate everything.

But I do think we have a lot of clients who come to us where one of their questions is: should I even pursue a seamless trial?

And the ability to go, hey, here's the basics, here's a simple one, and here's what you gain: it's worth having that kind of broad discussion over whether it's worth investigating things further.

Usually the answer is yes, it's worth investigating things further.

So there's certainly value on both sides of that, even if you move away from where the results are directly applicable.

Joe Marion: Yeah, I think there are also a lot of interesting situations where the result is conservative, and so you can still apply it even though you're not really following the underlying procedure that's described in the paper.

So, uh, Scott was talking about having a longitudinal model earlier, right? To use your early week data, you know, your week 12, to predict your week 52, for example.

And you can apply this correction in that situation.

What you do is you calculate the information fraction assuming you had all the data at the time of the interim, and that'll give you a penalty that's kind of too big.

But it is conservative. It gives you analytic control of type one error.

You don't have to simulate if you don't want to.
And honestly, I've found that that works really well and is absolutely acceptable to regulators.

It also applies to the Bayesian models we do, right? Um, you may have a dose selection criterion coming from sort of a complicated Bayesian model.

You may not be choosing the best dose; you may be choosing the ED90.

You can still apply this procedure in those settings, and it will be conservative, and that's really helpful.

Um, I'm always happy when I can move past simulation-based control of type one error and have that part of the puzzle solved, so I can focus on the other pieces of the design.

Scott Berry: Yeah.

So in some scenarios, you might want to pick the dose that is optimal over tolerability and safety at the same time.

And the selection of any dose would be controlled under this procedure. The exceptions would be cases such as: we're gonna take the high dose unless it's unsafe, and if, based on safety, it's unsafe, we're gonna take the lower dose.

That's a case where you probably don't even have to split alpha, in circumstances like that, because it's not efficacy that's selecting the dose.

But largely, in the other circumstances, we've been able to employ that.

So not every program should do a seamless two three trial, though the math is great, and you should almost have to figure out why you're not doing it.

But what are some areas where it might not be the right thing to do, to do seamless two three trials?

Kert Viele: If you have to use different endpoints in phase two and phase three. We don't like it, but that would break this.

Scott Berry: So one scenario might be that it's a relatively new disease, without a rich history of knowing even what the phase three endpoint would be.

Sometimes the FDA says: bring us the phase two and we'll figure out the endpoint.

It would be hard to have a seamless trial in that setting.

Not impossible, but that might be a hard setting, where the regulatory pathway of phase three is so unclear that you kind of need phase two to do that. That's one scenario.

Lindsay Berry: Yeah, I think, in general, to do a seamless two three trial, you need to, before the trial when you're designing it, be able to commit to a method for selecting a dose: an endpoint, and which dose you wanna select.

And sometimes you might not know how to do that yet, and you might wanna see all of the data and look at all of the endpoints and kind of play around in the data and pick the endpoint yourself.

So I think that might be a case where being able to look at the phase two data before you design your phase three trial might be preferable.

Kert Viele: But I think it's always good to...

Scott Berry: ...where I might try to push back and say: okay, you're losing 200 patients. Uh, you know, and think hard about those decisions.

But there are some circumstances where it's really hard.

Another circumstance may involve CMC issues: the method for manufacturing the drug is not phase three ready, and the FDA wants you to be using the drug that's phase three ready.

The manufacturing is set up in a certain way, and so that might be a case where you can't use the phase two data, because the manufacturing isn't even part of it.

You could do operationally seamless, potentially. But that's a scenario where I've seen that be a bit of a hurdle.

The other one, I want to spend a little bit of time on.

Go ahead, Joe.

Joe Marion: Scott, is there a situation where you have approval to, like, study the drug over, say, 26 weeks of exposure, something like that, but the FDA hasn't approved you to look at 52 weeks or 104 weeks, where your actual primary endpoint is?

And so you really can't even collect the data you need in the phase two.

Scott Berry: Yeah.

And it might be you don't have enough data to support a year of exposure, but you can do, uh, you know, 12 weeks of exposure, while your endpoint is 12 months.

Joe Marion: Yeah.

Scott Berry: Or you don't want to put in the resources to do a 12 month study before you get proof of concept in sort of a phase two trial, and you're only gonna get 12 weeks of exposure. That might be a scenario.

Um, by the way, in all of these seamless two three trials that we do, you put in a proof of concept: you need to jump a particular hurdle of reasonable efficacy or you don't even enact the stage two part.

So you can build proof of concept in as part of this.

And that brings me to the other one: we do a fair amount of work with smaller biotechs, and funding becomes a big issue.

The notion is: we don't have the money to run the phase three part; we need the phase two data to raise the money for phase three.

And so we need to make that data public and go to a conference and say: look how good the data are.

And if it's part of a phase three trial, a seamless phase two three, it's generally frowned upon to unblind data as part of that.

So that could be a potential restriction. It's something we've...

Kert Viele: Scott, that would apply to NIH funding as well, where you need the phase two data to get the grant for phase three.
Scott Berry: Yep.

Now, we do a lot of work with biotechs
where we try to set up objective criteria

in phase two: it won't go to phase
three unless a certain effect size

is seen, or a probability of a clinically
significant effect size is seen. And you
could potentially set up with a funder.

If that comes back positive, all
you get back is: did we achieve

the proof of concept threshold,
yes or no?

That's done by an independent
stat group, and they provide that.

The funder says, if you can hit
that, we'll help fund it.

The benefits are a smaller
sample size and a shorter timeline.

It's good for the funders in that
circumstance, and for all the

reasons that this is a better approach.

So we've done a good bit of that, but that
could be a constraint in a number of areas,

because largely regulators think that
part one is really part of a phase

three trial, 'cause the patients are
part of it. So that could be a potential
constraint in all of this.

Joe Marion: Those proof of concept
thresholds are really important, but

they're tricky because they kind of change
the way the trial looks in a lot of ways.

If there's a pretty high hurdle you
have to jump from stage one

to stage two, that's gonna reduce the
thing you normally would describe as

the power of the design, because to
win the design it's not just meeting

a significance threshold at the end of
it, but also jumping over that hurdle
in the middle of it.

And I think this is always a part of
clinical trial design. It's part of every

drug development program, but it's
kind of hidden if you don't do a

seamless design, because no one really
specifies what that hurdle is.

You don't ever see it. People are always
calculating power for their phase

three after they've hit the hurdle.

Um, and so it's kind of tough to
compare these. This is a really

important part of doing these designs,
but it can lead to some comparisons
that people find confusing, I think.

Yeah.
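[Editor's note: Joe's point, that the overall "power" of a seamless design is the joint probability of clearing the interim proof-of-concept hurdle and then hitting significance at the end, can be made concrete with a quick simulation. This is an illustrative sketch only, not the R code from the paper's appendix; the sample sizes, hurdle value, and normal-outcome model are all made-up assumptions.]

```python
import numpy as np

def seamless_power(delta, n1=50, n2=150, hurdle=0.3, crit=1.959964,
                   sims=20000, seed=1):
    """Monte Carlo estimate of the overall win probability for a
    one-treatment-vs-placebo seamless design with unit-variance outcomes.

    "Winning" requires BOTH:
      1. the stage-1 observed effect clears the proof-of-concept hurdle, and
      2. the pooled stage-1 + stage-2 z-test exceeds the critical value
         (one-sided alpha = 0.025 by default).
    """
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(sims):
        t1 = rng.normal(delta, 1.0, n1)     # stage-1 treatment outcomes
        p1 = rng.normal(0.0, 1.0, n1)       # stage-1 placebo outcomes
        if t1.mean() - p1.mean() < hurdle:  # PoC hurdle missed: no stage 2
            continue
        t2 = rng.normal(delta, 1.0, n2)     # stage-2 treatment outcomes
        p2 = rng.normal(0.0, 1.0, n2)       # stage-2 placebo outcomes
        n = n1 + n2
        est = (np.concatenate([t1, t2]).mean()
               - np.concatenate([p1, p2]).mean())
        z = est / np.sqrt(2.0 / n)          # pooled z-statistic
        if z > crit:
            wins += 1
    return wins / sims
```

Comparing `seamless_power(0.35)` against the same call with the hurdle removed shows the "hidden cost" Joe describes: the interim hurdle trims the naive end-of-trial power.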

Scott Berry: Yeah.

Some of them are designed in a way that
there is a pretty high hurdle, but they're

sort of built like: boy, if we hit
that hurdle, we want full acceleration.

We want full go.

If we don't hit that big hurdle,
we're happy to just have this
be a phase two trial,

and we'll analyze the data and we
might go run a phase three program.

Um, and others are set up where
it's a relatively low hurdle.

They're gonna go to phase three most
of the time, uh, and set it up
almost like a futility rule.

So we've been involved in the whole
gamut of that, um, which can be

customized to the particular
scenario the person's in.

Joe Marion: Yeah, the high hurdle
I think of as a design that's
opportunity seeking.

It's designed to go really fast if
things look good, and otherwise it

just turns into your normal phase two.

Scott Berry: Yep,

Joe Marion: Um, but you know, you
gotta get in that mindset, and you

gotta convince other people that
that's the right mindset to have.

Scott Berry: yep.

Yep.

All right.

Any other thoughts on this?

This paper was one of my favorite
papers in terms of setting this

up and making it straightforward, but
the optimality part of it I thought

was just absolutely fascinating.

And the ramifications for drug
development, also absolutely fascinating.

Joe Marion: Yeah, this is
one of the ones we use a lot.

Like, I use this regularly in our
projects, so that's been cool.

Scott Berry: Yep.

Lindsay Berry: Yeah, there's R code,
Joe's really nice R code, in the

appendix, so it's super easy to use.
Just plug it in and you can
calculate it in seconds.

Um, or you can just do what
Scott does and you can ask

Joe Marion: Right, seconds
would be really long.

Lindsay Berry: I was gonna say, or you can
do what Scott does and you can message me

and Joe and we'll calculate it for you.

So that's also an option.

Scott Berry: Uh, and they have
passed me the code, and I have
actually learned to run it.

So if I can do it, all of you can do it.

Joe Marion: Scott's just waiting for
the FORTRAN version of it, right?

Like that's, that's what we need.

Scott Berry: It will be faster.

But this code is so fast that
you wouldn't notice, I'm sure.

Joe Marion: What do you
think seamless means?

Like, I think of shirts.

Is that what it is, the seam, like,
you know, you can't tell where the
sleeve connects to the body?
Or is it something else?

Scott Berry: I think you can't tell
where the stage one and the stage two

parts are; there's no seam between them.

There's no white space, which is a
killer in drug development.

So maybe we could call these
white-space-free designs.

Joe Marion: I like it.

Lindsay Berry: Trademark that.

Joe Marion: We should talk about blinding,
'cause that's something that comes up too.

Um, like, I've had people claim that
you can't do these designs, that the

sponsor isn't allowed to know when the
design enters stage two, which is not true.

I think it's crazy.

Is that the consensus?

Kert Viele: We certainly have run 'em.

Scott Berry: right.

Kert Viele: So we've run them
for at least a decade.

Right, Scott?

Scott Berry: Yeah.

Yeah.

And there's actually a fair
amount of information.

I think what gets everybody nervous is if
you were to unblind any individual patient,

and you know John Doe is on placebo
or not. That can be problematic.
I mean, they still have outcomes to
come in the trial. Or if you know

exactly the effect size, there's
concerns about operational bias.

And the FDA guidance on adaptive
trials spends about half of its
time on operational bias.

Does it change the patients enrolled?

Do the clinical sites, uh, differ
in the different parts of the trial?

But you can get a certain amount of
information. You know you hit proof
of concept.

You know that the effect was above
that threshold, and you know what
dose was selected, even in part two.

And it might spawn another
phase three trial.

So this might be the first of your two
adequate and well-controlled trials, and

now you need to know the dose for the
other phase three trial, even for
sizing the other trial.

So there's a fair amount that gets known,
but there is a line you don't want to

cross in terms of what you learn
at that particular time.

Kert Viele: And there's an argument
there, to get a little wonky, in terms of

keeping the ratio between the arms in
phase two and phase three equal.

So even if you get to the end, even
if there are slight differences between

control and treatment, there are no
systematic differences between arms,

which is important for interpretability.

Joe Marion: Yeah, and from a patient
perspective, that means you might go

from randomizing, like, three to treatment
and one to placebo, to one to one

in the last part.

And that's, you know, something
to think about.

Scott Berry: We've done some where
it's one to one to one in the

first part, and then it's two to one.

So you maintain the placebo proportion
throughout, and the formulas in this
paper will allow you to do that.
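[Editor's note: Scott's example, a 1:1:1 stage one (say, two doses plus placebo) followed by a 2:1 stage two, keeps placebo at one third of enrolment in both stages. A tiny sketch of that arithmetic, illustrative only and not from the paper:]

```python
from fractions import Fraction

def placebo_share(ratio):
    """Placebo's fraction of enrolment, with the allocation ratio
    written as (active_1, ..., active_k, placebo)."""
    return Fraction(ratio[-1], sum(ratio))

# Stage 1: two doses plus placebo, randomized 1:1:1.
stage1 = placebo_share((1, 1, 1))
# Stage 2: selected dose vs placebo, randomized 2:1.
stage2 = placebo_share((2, 1))
assert stage1 == stage2 == Fraction(1, 3)  # placebo proportion preserved

# Joe's patient-perspective example, 3:1 then 1:1, does NOT preserve it:
# placebo goes from a quarter of patients to half.
assert placebo_share((3, 1)) != placebo_share((1, 1))
```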

Joe Marion: Mm-hmm.

Oh, and then the other thing I'm
curious about: what if you don't

wanna really pre-specify the dose
selection criteria at the interim,

but you wanna let, like, a really small
unblinded group within the company

see, I don't know, a couple of
efficacy endpoints summarized by arm,

and maybe the safety data, and make
that decision in kind of manual mode?

Is that an option?
Have we tried that?

Kert Viele: I mean, I'm gonna
think out loud here.

Shouldn't that be allowed,
at least from the math?

I mean, we're picking the
best arm, so picking anything

Joe Marion: It's good with the math.

Kert Viele: So yeah, so the math

Scott Berry: So let's
separate two scenarios.

There are scenarios where a DSMB picks
the dose, and the trial is set up so

that we're gonna allow them to pick it.

Then there's no concern about
unblinding, and the math all works.

You can pick any dose and the math works.

I think the scenario is: does somebody
at the company know something?

Uh, you know, there are no
written rules about this.

That's a bit of the challenge.

There are scenarios where somebody
very high up in the company, a CFO,

a CMO, who doesn't talk to the sites
and is not involved in any way, someone

at the executive level, is involved
in that. And I have seen that.

So there are ways there can be some
level of unblinding to the company

and all of that, but you want to really
keep it separate from the sites, from

anybody who's talking to patients, to
really make sure of this, because, um,

you know, it's hard to recover if somebody
says, do you have operational bias?

You can't show you didn't.

Kert Viele: So you're still
double-blinded, but maybe not triple

or quadruple or quintuple blinded.

Lindsay Berry: Yeah, and the math
in that situation works out for

the type one error, you know,
because it's conservative, I guess.

But you can't really simulate
that decision for power.

You can't say, here's how accurate
our dose selection is if you don't

have the algorithm pre-specified.

So that maybe is a
little more challenging.

Joe Marion: That's a good point.

Scott Berry: All right, so thank
you all for joining here in this

strange place called In the Interim.

For now, thank you all for joining.

Lindsay Berry: Thanks.

Kert Viele: Scott.

Joe Marion: Thanks for having us on Scott.

