Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: Well, welcome back everyone to In the Interim. I'm your host, Scott Berry. So welcome back. For those of you who have joined multiple times, I hope you still have enough alpha to keep joining us here in the interim. I of course make jokes, but that is related to today's topic. I'm going solo today.
And, oh, by the way, I've had people comment: if you can see the video of me, you can see in my background this beautiful view of trees. It looks like I'm in a tree house. I'm actually at my cabin. We have a cabin near Brainerd, Minnesota.
So I grew up in Minnesota, and like many people in Minnesota, we have our little cabin on a lake, and we spend somewhere around three to four months up here. We get out of the heat of Austin, Texas in the summer, and we get to enjoy this beautiful Minnesota summer without having the Minnesota winter or the Austin summer. We kind of get the best of both.
We are approaching the time at the end of the year when we travel back to Austin. It'll be in a couple weeks, so you won't see this view of my tree house much longer.
I'm essentially in a little room above the garage here at the cabin, where I do my work and the podcast, and we have this beautiful set of trees out the window. It's gorgeous.
We're about to embark on this trip back from Minnesota to Austin, and we have a big dog, a Bernese Mountain Dog poodle mix, a bernedoodle. He is about 110 pounds, so of course we drive back and forth each way. We drive up here, we drive back, and doing it in two days is just kind of brutal. It's about 19 or 20 hours total, so I've started trying to do it in one day. My wife won't do it with me, but the dog and I jump in the car, we leave at four, and we get in, you know, 10 or 11 o'clock at night.
I think when we head back this time we're gonna do it in two days. But I wanna set up that when we do this, there are multiple cities that we drive through. We're a couple hours from Minneapolis, and there are multiple ways in and around Minneapolis, and the traffic in Minneapolis-St. Paul has been challenging. It's construction season, as every summer is here in Minnesota.
We drive through Des Moines; that's usually not too bad. But Kansas City has multiple routes around it that you can take depending on the traffic. Same with Oklahoma City. And the biggest hurdle is Dallas-Fort Worth, depending on the timing of when we go through it. There are many ways in and around Dallas-Fort Worth, depending on the traffic.
So we get ready to drive out and we're thinking there are multiple routes we can go. And of course we're aided by our maps; we can get updated views of traffic and how long each route is gonna take. So do I decide how I'm gonna get around Minneapolis when I drive out of our cabin here in Brainerd, when I'm still two hours plus away? Do I decide how I'm going around Dallas-Fort Worth when I go through Oklahoma City, where I'm still a couple hours away? It seems like it would be absolutely nuts to make that decision then, especially when I have an app that gives me the most current view of traffic. I'm gonna make that decision when I need to make it, as close as I can to the decision point, and the app is gonna provide me the most efficient way home, which for me is the shortest route. Forcing myself to make that decision two hours earlier, or five hours earlier, seems bizarre.
Okay.
So let's jump in.
Today's topic is the promising zone design. You know, here at Berry Consultants, in the interim, we do adaptive designs. It's really our expertise. I've been doing this for 25 years, and the promising zone is an adaptive design. It's an adaptive sample size approach. So let's talk about the promising zone. It's actually a beautiful, neat mathematical result. The question is: is that mathematical result tied to a good design? Does it work well? So let's get into that.
One of the challenges, and it's sort of fun in doing this podcast, is I can't use slides. Now, I think there's probably functionality where I could put slides out there, but you don't want slides and all of that. Some of you are out for a jog. Some of you are driving to work in the morning. You're consuming this in different ways, and so I sort of have to describe this to you. So I'm gonna walk through a little bit of this with no slides, which is a challenge, but I think a good challenge actually. And something to think about when you give a talk at a conference: can you do that presentation where slides are an aid to you but not a crutch? So I'll try to do that. I may fail.
So let's take the simplest of designs for adaptive sample size. We're running a phase three trial. We have a standard one-sided 0.025, and, you know, I've said this on multiple podcasts, but a two-sided superiority trial is just strange. So we'll continue to talk about a one-sided alpha of 0.025.
Now, I'm gonna set this up within the context of a test of a mean. So at the end of the day we're doing a t-test, an ANCOVA, an MMRM, where we're doing a test of means and a higher mean is good. We'll think of a standard deviation of 12, just so I can have a range of effect sizes. And in this scenario, let's identify that a mean change of four points on this endpoint is really where I'm targeting.
I think from my phase two trials that we might be as good as five. We may even be north of that, but I really want to make sure I'm powered for four. You know, three is clinically reasonable in this setting. If I can do three, that's a marketable drug that's gonna help patients. I think it's approvable, but it's a big trial to power for three; it's a bigger trial. And that's where I'm stuck, and a lot of companies are stuck with this exact scenario. I might be standing on a drug that has a mean change of five in the population I'm enrolling.
So, a one-to-one trial, straightforward. A fairly standard thing I can do is think, okay, to be powered for that effect size of four with a standard deviation of 12, one-to-one randomized, I'm slightly over 80% powered if I do 300 patients. 150 and 150, slightly over 80% powered: a very standard decision that a sponsor can make, to run a fixed sample size 300 patient trial.
Now the question is, can we do better than that with an adaptive sample size? If we did a 500 patient trial, I'd be 80% powered for the effect of three, that slightly smaller effect. But I don't wanna run a 500 patient trial, because I think I'm north of that. If it's three, though, yes, I wish I would've run a 500 patient trial. Now, I could run a 200 patient trial and I'd be about 84% powered for an effect size of five. So I could run a trial anywhere in there: 200, 300, 500.
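For anyone who wants to check those numbers at a keyboard, here's a minimal sketch using the normal approximation for a 1:1 two-arm comparison of means (one-sided alpha of 0.025, SD of 12). The function name and setup are mine, not from the episode; an exact t-test would give power a touch lower but the same picture.

```python
from scipy.stats import norm

def power_two_arm(delta, n_per_arm, sd=12.0, alpha=0.025):
    """Normal-approximation power for a 1:1 two-arm comparison of means,
    one-sided test at level alpha, known SD."""
    se = sd * (2.0 / n_per_arm) ** 0.5     # SE of the mean difference
    z_crit = norm.ppf(1.0 - alpha)         # ~1.96 for alpha = 0.025
    return float(norm.sf(z_crit - delta / se))

# The three scenarios discussed:
print(power_two_arm(4, 150))   # ~0.82: 300 total, effect of four
print(power_two_arm(3, 250))   # ~0.80: 500 total, effect of three
print(power_two_arm(5, 100))   # ~0.84: 200 total, effect of five
```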
So can we do an adaptive sample size? And maybe this is the perfect scenario for a promising zone design. We'll come back to different ways to view this, but what is the promising zone design? It was aptly named, and we'll come back to the name. Promising zone design: it's a beautiful name, but it's a beautiful mathematical result as well.
So, Mehta and Pocock's 2011 article in Statistics in Medicine names the design that uses this, and the beautiful, clever mathematical trick here is that they have a way in which they don't spend alpha. It's a really neat, powerful result to think about in adaptive designs.
So what is the result? I want you to envision a sort of two by two table, and it exemplifies that looking at data doesn't cost alpha. It's the actions you take, depending on the data, that can trigger the need to adjust alpha. In that two by two table, the columns are that the data are good or the data are bad. I'll come back to what exactly that means, but think about good data at an interim and bad data at an interim. And then the different actions you can take, the rows, are making the trial smaller and making the trial bigger. So that's the two by two table I'm looking at here.
matrix I'm looking at here.
When the data are bad and you make the
trial smaller, usually that's futility.
So I, I do an interim
and the data are bad.
I'll come back to bad and good, and
that's the sort of clever trick of
the promising zone design is when
the data are bad and I make the trial
smaller, I typically stop for futility.
You, you lower.
The overall probability of a type one
error in the trial because largely you
can't win that trial in that scenario.
So things that lessen the sample
size in that case are going
to deflate type one error.
Nothing you do in that realm.
Inflates type one error, looking
at the data and doing futility
decreases type one error.
Now, if you make the trial smaller when the data are good, this is really group sequential. If the data are strong enough and we stop the trial, what people refer to as early, the terminology is you declare superiority. That's the opposite action: when the data are good, making the trial smaller can trigger inflating type one error without adjustment. So when we're gonna do that, make the trial smaller when the data are good, we adjust alpha. So that's making a trial smaller: in some cases you don't inflate type one error, and in some cases you do.
When I make a trial bigger, that's somewhat the opposite of that group sequential type trigger of calling superiority or futility. Suppose the data are bad and I make the trial bigger. That actually inflates type one error. It's the circumstance where I was originally planning a trial of 300, my data are bad, and I say, you know, let's go to 500. You're taking a bigger sample size because the data are bad; you're trying to wash out the negative data you have now with a bigger sample size. That inflates type one error. Now, when the data are good and I make the trial bigger, you do not inflate type one error.
One of the actions that falls into this bucket is response adaptive randomization. When you have multiple arms in a trial and you increase the randomization to an arm that's doing better, you have an effective sample size that's bigger. When the data are good, you're saying we want more data on that arm, and you actually decrease type one error. Now, that's a multi-arm trial scenario, not a two arm trial. But the other place this happens, where the data are good and you make the trial bigger, is the promising zone.
That's the neat little trick about the promising zone: when the data are good and you make the trial bigger, you don't need to adjust alpha. Now, I said I'd come back to what good and bad mean. It's a neat result. Suppose I'm going after a trial that's 300 patients and I look at the data at 200. What does good and bad mean? Largely, if I am trending toward a successful trial at 300, meaning my conditional power is above 0.5, so that if the effect stays exactly where it is or gets a little bit better, my effect size is what I need to win or bigger, then I can increase the sample size and there's no need to adjust alpha, because I'm in this realm where my data are good and I make the trial bigger. It's kind of the opposite of group sequential, where when the data are good and you make the trial smaller, it needs adjustment.
So that line between good and bad is a 50% conditional power. Now, you can lower that a little bit, and the paper talks a lot about defining good and bad depending on when you look and where the 300 is. They give an example where they can put the cutoff at 37%, and that defines good and bad with no adjustment to alpha needed. As long as we're at 0.37 or better, we could increase 300 to 500 and you don't adjust alpha. It's a really neat summary of adaptive designs and the spend of alpha. It's a really neat mathematical result.
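To make the conditional power machinery concrete, here's a hedged sketch using the standard B-value decomposition under the common "current trend" assumption, that the observed interim effect continues for the rest of the trial. This is my notation and simplification, not Mehta and Pocock's exact formulation, so thresholds won't match their worked examples to the decimal.

```python
from scipy.stats import norm

def conditional_power(obs_delta, n_interim, n_final, sd=12.0, alpha=0.025):
    """Conditional power at an interim of a 1:1 two-arm mean comparison,
    assuming the observed effect (the current trend) continues.
    n_interim and n_final are total sample sizes."""
    t = n_interim / n_final                            # information fraction
    z1 = obs_delta / (sd * (4.0 / n_interim) ** 0.5)   # interim z-statistic
    b = z1 * t ** 0.5                                  # B-value at the interim
    drift = z1 / t ** 0.5                              # implied full-trial drift
    z_crit = norm.ppf(1.0 - alpha)
    return float(norm.sf((z_crit - b - drift * (1.0 - t)) / (1.0 - t) ** 0.5))

# Interim at 200 of a planned 300, SD 12:
print(conditional_power(4.0, 200, 300))   # ~0.95: trending at the target
print(conditional_power(2.7, 200, 300))   # ~0.49: hovering near the line
```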
So what they do is they pack that into a design and they call it the promising zone design. Now, I'm gonna simplify it a little bit, but the general concept of this design is: okay, I'm originally targeting 300, and I have 0.025 at 300. Remember, that gave us 80% power for the effect size of four that we're targeting. Now, in order to use this result, I have to go backwards in time to an earlier point. I can't use it at 300, because at 300 my conditional power is either one or zero. So I have to do my look for adjusting sample size earlier, like 200, so that I can use this neat mathematical trick.
So I go back and I do an interim at 200. Yes, you can do a superiority analysis at that point, and they do talk about spending 0.001 alpha for superiority there. That's your choice to do that. But at 200, I look to see if my conditional power is above 0.5, or, let's say we can calculate exactly the threshold in the integral and it's 0.4. So at 0.4 or above, I can increase my sample size from 300 to anything I want, to 350, to 400, to 500, without adjusting alpha.
So let's say we can increase it to 500. They talk about the case where the data at that time point trigger exactly what your sample size is, between 300 and 500. It turns out it very rarely picks a sample size in between; it almost always goes to the biggest you'll allow it. It also can be unblinding: if the sample size comes back as 340, you know exactly what the effect size was because of the conditional power. So let's simplify it in this case: if the conditional power is above 0.4, we increase the sample size to 500, and our analysis at 500 uses 0.025.
So we do this spend free. The crux of it is this: the mathematical trick tells me when I can increase my sample size, and it's this 0.4 or above. And the design usually does something like: from 0.4 to 0.9 in conditional power, I'm going to increase the sample size to 500. But if it's above 0.9, I'm really liking my chances of winning at 300, so I'm gonna go with 300. I'm not going to increase it, because my conditional power's high.
This neat mathematical trick allows me to do this without any alpha spend. So the promising zone design is: do an interim at 200. If my p value at that time is significant at 0.001, I declare significance and my trial's over. If my conditional power is above 0.9 but I haven't won yet, I'm gonna stick with 300. If my conditional power is between 0.4 and 0.9, I'm gonna increase from 300 to 500, giving me that additional power. Now, if my conditional power is less than 0.4, I can't increase. That was this other place where my data are bad. Bad is a bad term for it; it just largely means my conditional power is in the region where, if I want to increase the sample size, I need to adjust alpha. And we're not doing that in the promising zone design.
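Putting that rule together, the simplified promising zone design described here reduces to a few branches. This sketch reuses the conditional_power function above and bakes in the episode's thresholds (0.001 superiority spend at the interim, promising zone from 0.4 to 0.9 conditional power); treat it as an illustration of the logic, not a validated implementation.

```python
from scipy.stats import norm

def promising_zone_decision(obs_delta, n_interim=200, n_planned=300,
                            n_max=500, sd=12.0):
    """Action at the interim under the simplified promising zone rule."""
    z1 = obs_delta / (sd * (4.0 / n_interim) ** 0.5)
    if norm.sf(z1) < 0.001:                 # small superiority spend at 200
        return "stop: superiority at the interim"
    # conditional_power is the sketch from earlier in this episode's notes.
    cp = conditional_power(obs_delta, n_interim, n_planned, sd)
    if cp > 0.9:
        return f"stay at {n_planned}: conditional power {cp:.2f} already high"
    if cp >= 0.4:
        return f"promising zone: increase to {n_max} (cp = {cp:.2f})"
    return f"stuck at {n_planned}: cp = {cp:.2f}, can't increase for free"
```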
There's this premise, and I'm gonna say I think it's a bad one: "but we don't want to spend alpha." Which I think is not good design. Your role is not to preserve your alpha; your role is to create a good design. But that's the premise of this design, and it's why it's attractive to people: you don't actually spend alpha.
So if it's 0.4 or lower, I have to stick with my 300, knowing that my chances of success are reasonably small. And boy, if my conditional power was 0.3, that's the case where I would love to increase to 500. But the design, the mathematical trick, doesn't allow me to do it. This mathematical trick is making my drug development decisions, and me personally, that makes me uncomfortable in that scenario.
But how does this work? How does this design perform? So that's the design I described. You could embed futility in there, and I'll leave that aside for now; I think that's a very good idea within this scenario. So when you employ that design, let's think about what happens in the trial at 200, when you do this interim. Largely, if your observed effect size is north of about 5.5, I'm gonna win with the group sequential 0.001. Remember, four has a power of 80% at that 300.
Oh, by the way, I said 0.025 alpha; because of that 0.001 it's 0.0249 in that case, but largely it's 0.025 in that setting. So if I get to 200 and my effect size is above 5.5, largely I've already won the trial and it's over. Now, if it's in that range of about 5.5 down to about four, I'm trending with 90% conditional power and above, and I'm gonna stick with 300 in that case.
So my conditional power of 0.9 is at about an effect size of four in that case, which is what I was originally powering for. So if I end up north of that, I'm really trending to a successful trial. Remember, 200 patients into my trial, I have a mean in the bank. A four or above gives me a conditional power, with only a hundred patients to go, that says I'm going to win the trial. And remember, if I see something north of about three at 300, I'm gonna win the trial at 300.
Okay. Now, the promising zone is labeled as that region where I'm between 0.4 and 0.9 in conditional power. When it falls in that zone, those are effect sizes just below that "I need about three to win" level. It's about 2.7 or above where I'm gonna have a 40% chance of winning at 300; 2.75, let's call it. So 2.75 up to four is the defined promising zone. That's the case where I'm free to increase to 500, so I'm gonna go to 500. If my effect is 2.7 or less, I have to stay at 300, so I'm gonna go to 300 in the trial. That's what the design does. That's what it looks like on the effect size scale.
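If you want to see where those conditional power thresholds land on the observed-effect scale yourself, you can invert the conditional_power sketch from earlier with a root-finder. Under my variance assumptions these come out near 2.5 and 3.7 rather than the rounded 2.75 and four quoted here; the exact mapping depends on the conditional power assumptions, but the qualitative picture is the same.

```python
from scipy.optimize import brentq

# Observed interim effect (200 of a planned 300, SD 12) hitting each
# conditional power threshold; conditional_power is the earlier sketch.
for target in (0.4, 0.9):
    eff = brentq(lambda d: conditional_power(d, 200, 300) - target, 0.1, 10.0)
    print(f"cp = {target}: interim observed effect of about {eff:.2f}")
```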
Now, what are the ramifications of this? First of all, I want you to think about trials that go forward into the promising zone: they have conditional power between 0.4 and 0.9 to win at 300. Most of those trials would've won at 300; more than 50% would've won at 300. So when you go to the promising zone and you go to 500, it's gonna increase that. So you simulate exactly that design. Remember our old case of four that had power of about 82%? That goes up about 4%. In the case of an effect of four, we increase our power by about 4%.
Our average sample size also increases, from 300 in the fixed design to 342. That's not a small increase. But if you simply ran a fixed trial of 342 patients, you would get the same four-ish, four to five percent increase in power. Is that efficient? No, it's not very efficient. Yes, it's better than the fixed design in power, but not really in efficiency. It's a marginal benefit. In the case of four, you're 80% powered and now you're increasing that four or 5%, so I'm not gonna say that's bad.
Three was the case that we were
lingering just below 60% power and
500 was gonna give us, in that case,
it was gonna give us 80% power.
If we ran 500, the power in that case
increases about 6%, and our sample
size increases by 55 on average, 355.
We also in, in increase about, there's
about a 2% chance that you lose the trial
at 500 that you would've won of 300.
That that can happen.
If you just would've gone
to 300, you would've won.
But because you went to the end, it
actually ended up, uh, getting worse.
So you get this sort of plus 5%, but
you lose kind of 2%, I'm sorry, plus 6%,
but you lose 2% that you would've wanted
300, and you spend 55 patients for that.
Again, 355 patient trial has more
power than the promising zone design.
On average, if you as a company
run this promising zone over and
over, you're, you're not efficient
with the use of your patients.
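Rather than take those operating characteristics on faith, you can get them from a short Monte Carlo run of the simplified design (my simplification: jump straight to 500 in the promising zone, 0.001 spent at the interim, final test at 0.025). Exact numbers will wiggle with the thresholds and the seed, and the published design varies the re-estimated N, so expect the same ballpark as the figures quoted here rather than an exact match.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def simulate_pz(true_delta, sd=12.0, n_sims=20_000):
    """Power and average N of the simplified promising zone design:
    interim at 200, planned 300, max 500, 1:1 randomization."""
    z_final, z_interim = norm.ppf(1 - 0.025), norm.ppf(1 - 0.001)
    se200 = sd * np.sqrt(4 / 200)
    wins, n_used = np.zeros(n_sims, bool), np.zeros(n_sims)
    for i in range(n_sims):
        d200 = true_delta + rng.normal(0, se200)   # interim mean difference
        if d200 / se200 > z_interim:               # early superiority
            wins[i], n_used[i] = True, 200
            continue
        # conditional_power is the sketch from earlier in these notes.
        cp = conditional_power(d200, 200, 300, sd)
        n = 500 if 0.4 <= cp <= 0.9 else 300       # the promising zone rule
        d_rest = true_delta + rng.normal(0, sd * np.sqrt(4 / (n - 200)))
        d_final = (200 * d200 + (n - 200) * d_rest) / n   # pooled estimate
        wins[i], n_used[i] = d_final / (sd * np.sqrt(4 / n)) > z_final, n
    return wins.mean(), n_used.mean()

print(simulate_pz(4.0))   # power and average N at the targeted effect of four
print(simulate_pz(3.0))   # and at the smaller effect of three
```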
Okay. And this is sort of a perfect case for the promising zone. Is it better than a fixed trial? I don't know; I'm not so sure in that case. The attraction to it is this sort of bizarre allergic reaction to spending alpha. Again, the goal of the trial is not to preserve your alpha.
So many companies come to us and say, oh, we're doing a promising zone design, because we don't wanna spend alpha. Well, that doesn't mean it's a good design; it doesn't mean it's efficient. Have you compared it to other designs? Let's take exactly that trial and say, under those parameters, let's do a group sequential design. Let's spend alpha. Now, I should stop saying spend, because spend sort of means it's gone. You allocate it across the looks; you still get 2.5% alpha. You haven't lost any of it. It's all there in the design. You just use it at different points in the design, hopefully in an efficient way.
We're gonna use a fairly non-controversial Kim-DeMets power-of-two spending function. And we'll use exactly the 200 patient look that the promising zone did, then we'll do 300, the original sample size, and we'll allow 400 and 500. So here's a design that has the exact same range, the same minimum, the same maximum, and we'll use a Kim-DeMets spending function over those looks. By the way, the final alpha at 500 is 0.0173. And again, that gets people so nervous, that it's less than 0.025, and that's what they call the spend. Yeah, but it's all used. You use all of that alpha at 200, 300, 400. It's so strange to me that people have this allergic reaction to using their alpha in efficient ways.
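You don't need a commercial package to get the flavor of that comparator. Here's a hedged sketch that computes Kim-DeMets rho = 2 spending boundaries for looks at 200, 300, 400, and 500 empirically, by simulating null z-statistic paths and taking quantiles among paths that haven't yet stopped. The final nominal alpha it returns should land near the 0.0173 quoted, give or take Monte Carlo error.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def kim_demets_bounds(looks=(200, 300, 400, 500), alpha=0.025, rho=2,
                      n_paths=1_000_000):
    """Empirical efficacy boundaries for alpha-spending with
    alpha(t) = alpha * t**rho, via simulated null z-statistic paths."""
    t = np.array(looks) / looks[-1]              # information fractions
    spend = alpha * t ** rho                     # cumulative alpha spent
    # B-values follow Brownian motion: independent increments, var t_k - t_{k-1}.
    incr = rng.normal(0.0, np.sqrt(np.diff(np.concatenate(([0.0], t)))),
                      size=(n_paths, len(t)))
    z = np.cumsum(incr, axis=1) / np.sqrt(t)     # z-statistic at each look
    alive = np.ones(n_paths, bool)
    bounds, prev = [], 0.0
    for k in range(len(t)):
        # Pick c_k so the incremental crossing probability equals the spend.
        frac_stop = (spend[k] - prev) / alive.mean()
        c = float(np.quantile(z[alive, k], 1.0 - frac_stop))
        bounds.append(c)
        alive &= z[:, k] <= c
        prev = spend[k]
    return bounds

for n, c in zip((200, 300, 400, 500), kim_demets_bounds()):
    print(f"n = {n}: z boundary {c:.2f}, nominal one-sided p {norm.sf(c):.4f}")
```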
So what does that design do compared to the promising zone design? Fairly simple, straightforward, right? I haven't even baked in futility. At the effect of four, remember, we're a little over 80% power for 300 patients. The promising zone increased that, let's call it 5%, for an average of almost 50 patients more. The group sequential design at four has a power of 95%. It goes from about 82 to 95%, more than double the increase in power, at the effect size of four, the original goal. The average sample size is 310. The promising zone average sample size in that case was 342, no different than a fixed sample size of 342. The power of 310 patients in a fixed design would be about 83%; ours is 95.
That's an efficient use of your sample size. If you're running five of these trials, ten of these trials, if the industry's running many of these trials, it's an incredibly efficient use of the sample size, patients, time, and cost. It's much better than the promising zone design, and it allocates alpha across the looks.
Remember that effect of three, where boy, we'd love to go to 500 in that circumstance? The power increase in that case is more than 20%. It almost gets to that 80% we'd get at 500; it's about 78%. It's almost a 20% increase in power where the promising zone increased it 5%. And the promising zone increased the sample size by 55 patients; this increases it to 383, but you get a huge increase in power. You almost get the power of a 500 patient trial for an increase of 83. Again, an incredibly efficient use of alpha.
The same thing happens for an effect size even of two in that circumstance. Of course, the other beautiful thing is, if your effect size is five, your power is almost a hundred percent and your average sample size is 250, because you win at the 200 patient look 60% of the time. Again, it's incredibly efficient. The group sequential design dominates the promising zone design. I don't care if you spent alpha; it just doesn't matter. When you simulate and compare that design to the promising zone, there's no way you'd pick the promising zone.
Okay, now, embedded within that, people get nervous: well, you might go to 500. You need to embed futility within that. Add futility rules at the 200, 300, 400, 500 looks. Simulate it, optimize it in those scenarios. Create a much more efficient design.
The amazing thing about the group sequential design, and by the way, I'm a card-carrying Bayesian. I live in the world of type one error in phase III, and I did that discussion with Frank Harrell; he gets mad at us for doing this. But it's the world we live in, and we do group sequential designs. We explore designs and look at the operating characteristics. So I'm not somebody standing up here saying frequentist is great and all that, but the design is pretty darn efficient in a world where you have to control type one error. Really darn efficient. In those circumstances, the group sequential design can be an incredibly efficient use of alpha, tied with futility, tied with simulation. Compare it. We get so many clients coming in that don't wanna spend the alpha, and they write in a promising zone, and it seems like this amazing thing.
Oh, sorry, what I was gonna say: when you do this group sequential design at 200, where again you win north of five, you stop for futility based on some point that you, as the sponsor, decide. You don't use the mathematical trick to say it's 0.4. You decide at what values you would like to keep going. In those circumstances, it creates a promising zone between your futility boundary and your win boundary. That's the region where we'd like to keep going.
And then at 300, it creates another one, another promising zone. The difference in the designs is that you make the decision of whether 300 is the right sample size when you get to 300. You don't have this weird mathematical trick saying, you know, we need to go back to 200 to decide if 300 is the right sample size, because there's this cool mathematical trick about good and bad data and spending alpha. The promising zone's been out there, and it's a siren song for people: oh my God, there's this adaptive sample size approach that doesn't spend alpha. It's a siren song. Simulate the design. Compare it to a fixed design. Compare it to a group sequential design that allocates alpha. We at Berry have never once had a scenario where a sponsor comes in and the promising zone design is better than a group sequential design. Zero. If you have one out there, show it to me. I'd love to see it. I'd love to see this rare bird out there.
Now, interestingly, and it's more than I can do in this episode, that was a circumstance where you had complete information on every patient. Most trials don't have that. Our endpoint is 12 months. Our endpoint is six months. Our endpoint is the 90-day modified Rankin. There are a number of scenarios where, when we do our interim, we have a number of patients with incomplete information.
You can do group sequential designs with incomplete information. We call those Goldilocks designs, and you can look up the Broglio, Connor, and Berry paper on the Goldilocks design, where largely you're employing group sequential techniques. When you get to 300, you make the decision: is 300 right? You don't know the final answer yet; you don't know your p value. You can use conditional power or predictive power to say, my probability's high, I'm gonna win right here, and then I stop enrolling, I get complete data, and I do my p-value calculation. Making that decision with incomplete data is still better than the promising zone, because remember, the promising zone goes back to 200 and says, I now need to forecast 300, but it also has incomplete information, because at 200 it is dealing with the same problem. In all of those same circumstances, the promising zone has never been better than a Goldilocks group sequential type design.
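To give a flavor of the Goldilocks idea with incomplete follow-up, here's a toy sketch with made-up conjugate-normal assumptions and a known SD, not the actual Broglio, Connor, and Berry algorithm: at the decision point, impute the still-pending patients from the posterior predictive distribution, repeat, and ask how often the completed trial wins.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def predictive_prob_success(done_trt, done_ctl, n_pending_per_arm,
                            sd=12.0, alpha=0.025, n_draws=5_000):
    """Toy predictive probability that the trial wins once follow-up
    completes: flat-prior normal model, known SD, 1:1 randomization."""
    z_crit = norm.ppf(1 - alpha)
    wins = 0
    for _ in range(n_draws):
        means, totals = [], []
        for arm in (done_trt, done_ctl):
            n_done = len(arm)
            mu = rng.normal(np.mean(arm), sd / np.sqrt(n_done))  # posterior draw
            pending = rng.normal(mu, sd, n_pending_per_arm)      # imputed outcomes
            totals.append(n_done + n_pending_per_arm)
            means.append((np.sum(arm) + pending.sum()) / totals[-1])
        se = sd * np.sqrt(1 / totals[0] + 1 / totals[1])
        wins += (means[0] - means[1]) / se > z_crit
    return wins / n_draws

# E.g. 120 completers per arm at the look, 30 per arm still in follow-up:
trt = rng.normal(4, 12, 120); ctl = rng.normal(0, 12, 120)
print(predictive_prob_success(trt, ctl, n_pending_per_arm=30))
```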
Now, it's one of these circumstances where the math triggers the design, and that just makes me incredibly uncomfortable. This neat mathematical result, that you don't inflate type one error at 0.37 or 0.4 or 0.43, should not be your drug development decision. Don't force a decision earlier to use this neat little mathematical trick. Don't make your decision about where you're gonna go around Minneapolis two hours earlier.
Now, it's not all that different a scenario, because we have incomplete information. When I get to the top part of Minneapolis, I have to decide: 494 to the west, 94 through the city, or a little bit east of that. I have incomplete information when I get there, right about the point I have to make this decision. The app is gonna tell me the relative time for each of these, and rush hour can be pretty bad in Minneapolis. So I'm still dealing with incomplete information, but I'd rather do it close to the time I have to make the decision, rather than when I leave my house two hours earlier.
It just can't be possible that I can make a better decision two hours earlier. And in this adaptive sample size setting, that's what is done in the promising zone design. Sure, it's better than a fixed sample size. But once you get into this world and say, I am in a scenario where I could go to 500 if the promising zone says go to 500, then I could have a flexible sample size. Do it right. Simulate it. Explore the different approaches. Don't just write in the promising zone. You know, in the siren song, they run into the rocks, and it's not good.
Compare it. If you find a case where the promising zone does better, let me know. That's really cool, but we have yet to find that one. Make the decision when you need to make it, don't make it earlier than that, and be efficient with your use of alpha here in the interim. We take that seriously, and we try to use our alpha efficiently. We also try to drive and get there in the shortest amount of time. In a couple weeks we'll be making that trip, and I plan to have my app open, telling me which way to go, and not making those decisions before I get to that point in the road.
So I thank you all again for
joining me here in the interim
and till our next interim, cheers.