Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: All right.
Welcome everybody back to In the Interim, a podcast on innovative clinical trial designs, Bayesian trials, the design science of clinical trials.
I'm your host, Scott Berry,
and I have two guests today.
Two wonderful guests.
Uh, and I'll introduce the topic.
Uh, first I'll introduce
our guests, uh, Dr.
Anna McGlothlin, who is a director and senior statistical scientist here at Berry Consultants, and Dr. Michelle Detry, who is a senior statistical scientist and director at Berry Consultants.
And they are both directors of
implementation and execution
of innovative trial designs.
Uh, we have here at Berry Consultants a group that specializes in the implementation of innovative trial designs. So first, welcome to In the Interim.
Anna McGlothlin: Thanks for having us.
Michelle Detry: Yes.
Thank you.
Scott Berry: And especially the name In the Interim, uh, in the interim of our trial designs, is, uh, rather apropos, as you guys spend your time in the interim, you live in the interim. So it would be great if you described, uh, to our listeners, what is implementation of adaptive trials, Anna?
Anna McGlothlin: So what we do is, when a trial has been designed that has an adaptive, um, feature in it and has an interim analysis that's going to look at data partway through the trial, we are the group that receives that interim data, performs the pre-specified analyses that are defined in the protocol and the statistical analysis plan, and then communicates to either, um, a DSMB, for example, or whoever's going to operationalize that decision, whether any specific triggers have been met, such as early stopping, um, maybe it's met a futility rule, maybe there's an update to randomization probabilities.
So we're the group that is, um, cranking
the models underneath the hood and
checking those against the pre-specified
design and making sure that, um, the
design does what it was intended to do.
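(For readers who want a concrete picture of the kind of pre-specified trigger check Anna describes, here is a minimal sketch in Python. The counts, the Beta(1, 1) priors, the posterior-probability decision quantity, and the 0.99 / 0.10 thresholds are all hypothetical, invented purely for illustration; they are not from any actual trial's statistical analysis plan.)

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical interim counts for a binary endpoint (invented for illustration).
control = {"n": 48, "responders": 17}
treatment = {"n": 52, "responders": 26}

# Conjugate Beta(1, 1) priors updated with the interim counts, then sampled.
post_ctl = rng.beta(1 + control["responders"], 1 + control["n"] - control["responders"], 100_000)
post_trt = rng.beta(1 + treatment["responders"], 1 + treatment["n"] - treatment["responders"], 100_000)

# Decision quantity: posterior probability that treatment beats control.
pr_superior = float(np.mean(post_trt > post_ctl))

# Hypothetical pre-specified triggers, as they might appear in an analysis plan.
if pr_superior > 0.99:
    decision = "efficacy trigger met: recommend stopping early for success"
elif pr_superior < 0.10:
    decision = "futility trigger met: recommend stopping for futility"
else:
    decision = "no trigger met: continue enrollment as planned"

print(f"Pr(treatment > control | interim data) = {pr_superior:.3f} -> {decision}")
```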
Scott Berry: And there was a time at Berry Consultants... Berry Consultants is approaching our 25th anniversary. Uh, so we've been around, uh, designing innovative trials, and, and I should back up.
So, uh, Anna, how long have
you been at Berry Consultants
Anna McGlothlin: I've been here 12 years.
Scott Berry: And Michelle?
Michelle Detry: I just hit 14.
Scott Berry: 14.
So we've been at this a while.
Um, and, uh, both of you worked for a while on the design side. So we were designing, uh, innovative trials, adaptive trials, and we'll get into what some of those look like, but, um, multiple interim analyses, adaptive aspects to them. And we didn't, at Berry Consultants, worry much about the implementation or the running of the adaptive trials, but it became necessary for us, uh, to get involved in the running of the interim analyses.
Um, and Anna and I were involved in one particular project where we designed an adaptive trial that was supposed to do an interim analysis when a hundred patients were enrolled in the trial. And the company came back to us when a hundred were enrolled, and they had paused for nine months waiting for it to reach a threshold. And they came back to us, and Anna and I looked at each other like, oh no. It was, in some large part, a disastrous outcome for them within that setting, and it exemplified why we had to get involved in running trials if we were going to design innovative trials. Running them is critically important to the success of these trials.
Anna McGlothlin: Yeah, and I think what happened there, and what often happens, is the people we work with on the design, there's often a lot of turnover, and sometimes the people that we worked with were the ones who knew the design, knew what was supposed to happen. If they move on, sometimes we're the only people who still know what the design was intended to do. So I think that's where the ball got dropped in that case.
And, um, something that we are
now aware of and look out for.
Scott Berry: Yeah.
Yeah.
Michelle Detry: And I think also there's this perception that when an interim is triggered, the enrollment should pause. And I think that's something, you know, through design and our experience with adaptive trials, we don't want enrollment or the randomization to pause in a trial. You know, in clinical trials it's hard enough to get enrollment started and generate momentum, and the sites are on board and everyone gets going. And if you have a pause, you lose that momentum. And often there's this perception that you're supposed to pause when you hit it.
And so I think one of the other things
we do is a lot of consulting on how
to do this while not pausing and to
be efficient and quick in doing the
interim analysis because enrollment
is continuing in the background.
Scott Berry: So the FDA guidance on adaptive trials spends about half of its time on operational bias and really the running of it, and who knows what when during the course of it. So a huge part of that is that pausing enrollment also sends external signals that, you know, something's happening within that. And largely we would like the adaptive part of the trial to be invisible to sites, that these analyses might be happening in the background. And we might do 20 adaptive analyses during the course of a trial, and sites don't know the difference in the course of the trial until, of course, something happens to patient enrollment or the stopping of the trial in the course of that. So, um.
Let's, let's talk about
the process of this.
Maybe an interesting thing, Michelle, is, uh, what is different and what extra is needed in an adaptive trial for the implementation that might not be there in a fixed trial? For people out there that are used to fixed trials, what's different?
Michelle Detry: Yeah, I think a lot of the things that are different are things that you'd normally do at the end, when the trial's completed and you're locking the data and bringing it in and doing queries and looking for completeness. Some of that gets moved earlier in the trial, because when you do an interim analysis, you want to have, um, good quality data on which to make your decisions. But it doesn't mean that it's locked data, so it's not the same processes of locking the data, but you wanna be able to have good quality data because you wanna make the correct decisions. Because, like Anna described earlier, you're gonna make decisions as far as maybe stopping enrollment, changing randomization probabilities, enriching to a population, and you wanna make sure that decision was made off of good quality data.
So often what we find when we work with people is the processes they may have at the end, we have to move some of that earlier, but we do this in a way to try to make it as, um, as efficient as possible, not to add extra burden. So, you know, things that we do are to identify a key subset of variables that are needed at the interim analysis that the, um, the sites can focus on, making sure they're up to speed on entering them and that they're complete, uh, that the monitoring is there to help the sites know that they need to be, uh, up to date on these, that queries are being generated and completed for these key variables. But it's not the totality of the database, because that's a large amount of data.
So I think one of the big differences, and one of the misconceptions, is that some of the stuff that happens at the end now needs to happen earlier, but you can do it in a way that it's not overly burdensome. And so we kind of help with that through advising and consulting and understanding the processes, um, so that we can help the groups, uh, be able to implement and be ready to do this. So it's not just running a statistical model. Anna and I spend a lot of time on the preparation, as far as how this is going to work and how that fits in with their processes, and trying to help groups with that.
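(As an illustration of the "key subset of variables" idea, a sketch like the following, with entirely hypothetical site IDs, variable names, and data, is roughly what a per-site completeness check on an interim extract might look like.)

```python
import pandas as pd

# Hypothetical interim extract: one row per enrolled participant.
# Site IDs, variable names, and values are invented for illustration.
interim = pd.DataFrame({
    "site":              ["01", "01", "02", "02", "03", "03"],
    "baseline_severity": [12.0, 15.0, None, 9.0, 11.0, None],
    "day28_outcome":     [1.0, None, 0.0, 1.0, None, None],
})

# The small set of variables the interim model actually needs.
key_variables = ["baseline_severity", "day28_outcome"]

# Fraction of key variables entered, by site, to focus monitoring and queries.
completeness = interim.groupby("site")[key_variables].agg(
    lambda col: round(col.notna().mean(), 2)
)
print(completeness)
```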
Scott Berry: So your colleague Mark Fitzgerald talks quite a bit about that, that doing that earlier is sometimes therapeutic in the trial, that you discover uncertainties, weird things that you wouldn't have discovered till the trial was over. But because of this process, you discover them earlier, in time to intervene.
Michelle Detry: Yeah, absolutely. I think, um, by doing this you get kind of a leg up on identifying where there may have been problems, maybe there was a misunderstanding with sites on how things were entered into the database or how things are to be measured. And by doing these interim analyses and having the focus on the variables used in the interim analyses, which are often the primary outcomes, so your key variables in the trial, you can sometimes catch issues that, when you got to the end of the trial and locked the data, may have been hard to recover from. And so, um, I think it is definitely a good thing; it gives the trial a better chance to identify problems earlier, before it gets too far along.
Scott Berry: Yeah.
Yeah.
Okay.
So, uh, sort of take the listeners and me, I, I don't do implementation, so help take me through the process of this. Uh, Scott or a statistician at Berry Consultants designs, uh, a trial, 10 adaptive analyses, might be response-adaptive randomization in a phase two/three seamless trial kind of thing. And, um, you're getting involved in the implementation of this. Sometimes you actually form a named group, a statistical analysis committee, uh, an independent statistics group, something. Um, what is the process of this? Are you always chartered? Do you create a charter for this? Are you separate from the DSMB? Sort of take us through the process, before you receive any data, of what happens. Anna?
Anna McGlothlin: Sure, and I might
ask Michelle to jump in here 'cause
this is her specialty, but we're not
always formally a chartered group.
It depends a lot on the trial and what the needs of the trial are, but the purpose of forming this group is to identify who's going to know the data, who knows what when, so the naming of the individuals who are going to have access to that unblinded data in the trial. Um, and we at Berry make sure that, um, we set up a firewall, so that those of us who are on that unblinded side looking at the real trial data are not talking to anybody who is maybe on the design side and could still influence how the design is working. So we have that very clear separation between the groups, and that's one reason why we might wanna put a formal charter in place, is to just lay out who those named individuals are. Um, set up, uh, processes for how we communicate with each other, how we communicate with a DSMB, how we communicate with the sponsor.
Um, in some cases that information
may already be set out.
It may already be in the DSMB
charter, um, or there may be
a separate communication plan.
So in those cases, we may not
need to create a formal charter.
But, um, if it's not somewhere else,
then this is a good place to, to
put all of that down in, in writing
before we receive unblinded data.
Scott Berry: Yeah.
Uh, and so on these trials, um, your typical interaction point, uh, we will talk about the point of where you get the data, right? We haven't sort of gotten to that point. Uh, but your interaction is typically with the DSMB, Michelle?
Michelle Detry: It often is. Um, once we are unblinded and perform the interim analyses... you know, prior to unblinding, in the preparation phase, we have extensive collaborations, um, with the trial team, and the trial team may be the statisticians, it may be operations, things like that.
But once we're unblinded and actually
conducting the interim analysis,
it's most often with the DSMB.
Um, the DSMB is the one that is
charged with communicating information
back to a designated contact.
So they're a very natural group
to be able to communicate what
the interim analysis says to do.
Um, it's not so much that
they're deciding what to do.
They're checking to make sure that the
interim analysis was able to be conducted
as planned according to the protocol,
and then communicating what the protocol
said to do back to their contact.
So, um, we work a lot with the DSMBs.
We work a lot on defining
this communication channel,
like Anna was saying.
Um, and there are some situations where the type of adaptations do not require it to go through a DSMB. So there may be a designated, um, sponsor contact that's separate from the trial team and separate from day-to-day trial activities. You know, getting back to that, um, operational bias concern that you were talking about, where we're trying to mitigate any sort of information being, um, shared with individuals with whom it should not be shared.
Um, so there's different ways to do
it, and that depends on, you know, if
we have something like that, we would
probably have a separate charter.
But otherwise, our role is often
included in the DSMB charter.
Just to have one document where
everything's in one place.
'cause we're talking with the DSMB
and it's a great place to have that.
Scott Berry: Yeah.
Okay.
So in the trial, you have a data safety monitoring board who, uh, are chartered to worry about patient safety in the trial, uh, the scientific credibility of the trial. They're, we won't get into the issue, but they're unblinded. Uh, we won't get into circumstances where they're not unblinded, which none of us agree with, but they're seeing the data. That happens in every clinical trial. Well, in almost all clinical trials; not every trial, but most clinical trials have a data safety monitoring board. There are some that are exempt from it, low-risk ones, but largely the trials we're talking about have a data safety monitoring board. Um, and that's typical of all trials. But now there's this additional group, we'll call it a statistical analysis committee. It tends to be statisticians. It tends to be the group that is going to implement the pre-specified design. Uh, within it there may be communications; they're unblinded. The statistical analysis committee is certainly unblinded, to carry out the pre-specified analyses in the design.
So these are two groups.
Um, the DSMB will be there
in most clinical trials.
The statistical analysis committee is
somewhat, uh, specific to innovative
adaptive trials and the need for that.
Okay, so now what is the process before you receive the data for the first interim? So you're planning out the first interim; it may be triggered by patient enrollment, patient exposure, some point in the trial. It's predefined when that happens. So what are the steps that you're doing before the time somebody says, okay, here's the data? What are you doing in that time?
Anna McGlothlin: Yeah, I can
start and Michelle, jump in.
So I think our first step is just
making sure we understand the design.
So usually with an adaptive design, there's a writeup; at Berry we usually call this the adaptive design report. It just lays out what the design is, when the interims happen, what triggers them, what the model is that's going to analyze the data, what the population is, and all of the different decision rules that are possible at the interim.
So the first step is just making
sure we understand all of that,
that we, we know when the interims
happen, we understand the model.
Um, we'll go through the statistical
analysis plan, make sure that everything
is very clearly written out, and
this is our chance to ask questions.
So if we are going through and we find
that there's a place of ambiguity about
what to do in a particular situation,
or maybe what the model is prior,
didn't get written down or something,
this is our chance to, to find that,
to ask about it, get it written down.
Um, and then once we're...
Michelle Detry: And just to add on to that, I think also one of the things that we often find, and you know, I'd be interested in your additional thoughts on this, is that we're approaching this from the perspective of implementing a pre-specified plan. And it's often a different perspective from maybe when the trial was being designed. And so I just wanted to cue in on that point you made about, you know, maybe a prior, uh, priors are usually there, but there may be some piece which is not fully specified or it's not clear to us. And so I think that's a big thing that we spend a good chunk of time on, is that we do have the pre-specified plan. So I don't know if you wanna give any examples of that, because there have been, um, some trials, you know, we can't always talk about all the details, but where you've had some great insights into details that need to be added that weren't necessary for simulating in the design phase, but were crucial for implementing it.
Scott Berry: And maybe I'll add to the question: what are the things where maybe the pre-specified plan isn't detailed enough? Is that missing data? Is it, uh, you know, what are the things that you really worry about? 'Cause all of a sudden, when you're unblinded, now it's, oh, it's unclear what to do here.
Anna McGlothlin: Yeah,
missing data is a big one.
Um, so that's always on
my radar to look for.
And a lot of times, uh, if we have a model that's using covariates, for example, um, you know, we may have been good and caught, okay, here's how we're gonna handle missing outcome data.
But if you're missing a covariate,
we also need to know how we
should handle that in the model.
So that's a big one.
Um, the population that we're using
is, is another one that I always
ask a lot of questions about.
Um, so especially if the analysis
population is not strictly ITT.
If it's maybe some modified ITT,
so maybe it's patients who are
randomized and also treated.
Um, and then I, I usually have
questions about what that means
for the maximum sample size.
So there may be a trial that has, um, a
pre-specified futility rule, for example,
that's looking at what's the predictive
probability that the trial's gonna be
successful at the maximum sample size.
And if that's a small probability, the trial would stop. But we would need to understand, when it says maximum sample size, is that the number randomized, or is that the number randomized and treated? And then, if so, how do we estimate that when we're at an interim, where we don't necessarily know how many patients are gonna be treated out of those randomized?
So those are the kinds of questions
that, um, you know, we've learned
over the years of doing these
that, that we need to look out for.
I dunno, Michelle, anything to...
Scott Berry: Well, even missing data is weird at an interim. So you might have a plan that says what to do with missing data at the end, but sometimes you're missing data because the patient's actually gone in some way, they may be missing, or it might just be time delayed, and that is a different kind of missing data.
Anna McGlothlin: Right.
And I think we had an example one time of a trial that had specified that, um, at the final analysis, any patient who was missing their data would be imputed as a failure or something like that. Um, but you know, we needed to be clear that at the interim, just because a patient was missing their data didn't mean they should be considered a failure.
It might just be that they
hadn't got to that visit yet.
So again, just making sure all
those details are really clear
for whoever's gonna be running it.
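(To illustrate the distinction Anna is drawing: a sketch, with hypothetical column names, visit window, and imputation rule, of how an interim analysis might separate "not yet due" from "truly missing" before applying a failure imputation.)

```python
import pandas as pd

# Hypothetical interim rows for a binary outcome assessed at a day-90 visit.
# Participant IDs, values, and the 90-day window are invented for illustration.
data = pd.DataFrame({
    "participant":   ["A", "B", "C"],
    "days_on_study": [120, 45, 130],      # days since randomization at the data cut
    "withdrew":      [False, False, True],
    "day90_outcome": [None, None, None],  # outcome not yet in the database
})

VISIT_DAY = 90  # hypothetical visit window for the outcome

def interim_status(row):
    # A failure imputation (per a hypothetical analysis-plan rule) applies only
    # when the participant had the opportunity to complete the visit and did not.
    if not pd.isna(row["day90_outcome"]):
        return "observed"
    if row["days_on_study"] < VISIT_DAY and not row["withdrew"]:
        return "pending: visit not yet due, do not impute"
    return "missing: impute as failure per the pre-specified rule"

data["interim_status"] = data.apply(interim_status, axis=1)
print(data[["participant", "interim_status"]])
```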
Michelle Detry: Yeah.
And I think the other thing with
that is you were talking about
the, um, incomplete versus missing.
Well, often as statisticians when we
talk about missing data, we mean that the
data's just not available and we often
have to deal with understanding incomplete
data where it still may be coming.
And so understanding the impact of that.
And so one of the other things
that we spend a lot of time on is
understanding the different variables and how they're collected and how quickly they may be available in the database. Um, you know, sometimes things require scans, and it takes time for scans to be completed, or they're lab values and they're run in batches, or there's adjudication being done and an adjudication committee has to meet and review it. So Anna and I deal a lot with thinking about how these things impact data availability and how that impacts the pre-specified design, with when an interim is supposed to be triggered and what may be available.
So we may go back to the design team with
a lot of questions asking for details,
how to handle this refinement, um, as
well as working with the operations
team to understand availability of
data and what kind of time it takes.
Scott Berry: Okay.
Okay, so now you're trying to consume the design, the statistics of the design. You're clarifying, and you're anticipating weird things that can happen, and maybe we'll talk a little bit about weird things that happen, uh, for you. But so now you're getting to the point where you feel comfortable with the design. Now, are you interacting with how you're going to receive data, statistical models? What's the next set of work that happens, Michelle?
Michelle Detry: So I think, uh, the next step, well, there are things that go along in parallel. It's definitely not a clear set of steps in an order, but I think one of the next steps is the data dictionary. I kind of alluded to this earlier, where we identify, you know, Anna just said we're understanding the design, we're understanding which data elements are needed to run this pre-specified analysis, and then we work on writing up a data dictionary of what is needed. And that's very much a collaborative, iterative process, um, with the trial team, with the client, to, um, understand how that matches with how things are being collected from the eCRF.
And so, um, writing out then this
dictionary of, okay, these are the
things that we need to conduct the
interim, and these are only the things
that we need to conduct the interim.
So it does two things.
It, um, identifies a nice subset of
data that can hopefully be, um, pulled
quicker and an analysis data set
for the interim be created quicker.
And then it also identifies those
key variables that we discussed are
needed for monitoring and cleaning.
Not just cleaning for data entry
errors, but um, for completeness.
So sites can be trained, monitors can be
watching this, making sure the sites are
up to date on getting these things in.
And then also cleaning can be done in preparation for an interim analysis.
And then the data dictionary often requires a lot of interesting interactions, because, Anna talked a little bit about this, but, uh, we have to define this thing called opportunity to complete. We have to understand if the participants in the trial could have had the opportunity to complete the visit where the outcome was recorded, and know if we should expect it. So when we get the data, if there's a data element that's not there, we need to understand: is it not there because they didn't yet reach that time point, so it's okay for it not to be there; is it missing because maybe they dropped out and, um, ended their participation in the study; or is it incomplete, um, and it should have been there? And a lot of this ties into what the design says as far as how an imputation method may be used or how it may be handled. Anna alluded to that, as some things, um, missing, may be treated as failures, but there may be different imputation considerations that may be used, based upon what's happening in the data. So we spend a lot of time on defining what we need, so that when the data is, uh, provided to us, it's very clear.
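(A sketch of what one entry in such a data dictionary might capture, together with an "opportunity to complete" check. The field names, variables, visit windows, and reporting lags below are hypothetical, invented purely to illustrate the idea Michelle describes.)

```python
# Hypothetical interim data dictionary entries; the variables, eCRF forms,
# visit windows, and reporting lags are invented for illustration only.
interim_data_dictionary = [
    {
        "variable": "day90_response",
        "ecrf_form": "Outcome Assessment",
        "visit_window_days": 90,
        "expected_lag_days": 14,   # e.g., adjudication or central-read turnaround
        "used_for": "primary interim model",
    },
    {
        "variable": "baseline_severity",
        "ecrf_form": "Baseline",
        "visit_window_days": 0,
        "expected_lag_days": 3,
        "used_for": "covariate adjustment",
    },
]

def opportunity_to_complete(days_since_randomization, entry):
    """A value is 'expected' once the visit window plus the usual reporting
    lag has passed at the data cut; before that, its absence is not a query."""
    return days_since_randomization >= entry["visit_window_days"] + entry["expected_lag_days"]

print(opportunity_to_complete(120, interim_data_dictionary[0]))  # True: should be in the database
print(opportunity_to_complete(95, interim_data_dictionary[0]))   # False: not yet expected
```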
Scott Berry: Now, uh, you also want in that process, uh, test runs before the real run, to iron out issues, you know, and you're making sure that you have the data format as you described, uh, so that you can then run the appropriate statistical models. So now let's assume you've done this test run, and I'll come to, uh, potential issues with that. So now you're getting to the point where the timing of the first interim is coming. Now, what does that process look like? We're gonna ask about how long this takes, but from the time, now, let's assume, as Anna described, we're not pausing enrollment, patients are being randomized. And on June 1st you receive a data set. You know what that's gonna look like, you know the format, you receive this secure transfer, you now receive this data set. What happens now from that point, uh, moving forward?
Anna McGlothlin: So, as, as Michelle
has mentioned, we try to be really
efficient in turning our analyses
around recognizing that, uh,
patients are still being enrolled.
We wanna get to a decision as quick as possible. So we'll have everything that we can set up ahead of time, we'll have all of our programs written.
We always create a program that
writes a, a report up that has
summary tables and figures.
So all of that as much as possible
we do ahead of time so that once
we get that data, um, we can run it
through the model, create our report.
We always have, uh, a lot of checks in place and verification processes, so there's always somebody, um, that's looking through, checking that the model is correct, checking that our summaries are correct.
Um, and then we always like to have time to just think, um, and make sure the models make sense, make sure that we aren't missing something, that something hasn't gone completely off the rails.
Um, we often will compare the data
that we received at an interim to
the data we received at the previous
interim to check if something weird
happened, patients disappeared.
Um, we have fewer mortality events this time than we did last time, things like that that raise a flag that says something's not right. Um, so we always wanna make sure that we have time, not just to hit a button and run an analysis and spit out a table, but to think and make sure that the results we're producing make sense.
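(A toy example of the kind of interim-over-interim sanity check Anna mentions; the participant IDs and event flags are fabricated to show the logic, nothing more.)

```python
import pandas as pd

# Hypothetical snapshots from two successive data transfers (invented data).
previous = pd.DataFrame({"participant": ["A", "B", "C", "D"],
                         "died":        [True, False, False, True]})
current  = pd.DataFrame({"participant": ["A", "B", "C", "D", "E", "F"],
                         "died":        [True, False, False, False, False, False]})

# Participants present in the last transfer should not vanish from the new one.
disappeared = set(previous["participant"]) - set(current["participant"])
if disappeared:
    print(f"Flag: participants missing from the new transfer: {sorted(disappeared)}")

# Counts of an irreversible event such as death should never decrease over time.
deaths_before, deaths_now = int(previous["died"].sum()), int(current["died"].sum())
if deaths_now < deaths_before:
    print(f"Flag: mortality count decreased ({deaths_before} -> {deaths_now}); query the data before analysis")
```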
Scott Berry: Yeah.
So in the world of AI, of course you could imagine that thinking this could be entirely automated: data goes in, models are run, uh, and there's no human involvement in that. But it's critical to have experienced people who make sure things look right, uh, and you've found multiple issues, whether it's data, whether it's models. So it's a critical aspect of this to have experienced humans looking at it, and not AI-driven interim analyses.
Anna McGlothlin: Yeah, for sure.
I think we've seen a lot of things where, um, we think we've got everything ready to go, and then, uh, as we are sitting and thinking about it, talking through it as a team, you know, somebody will raise an issue that, yeah, this can't be right, and we'll maybe have to go and ask a question about the data or, uh, you know, dig into the model, see if there's some kind of convergence issue or something going on.
So I think that's a really
critical aspect of this.
Um, and we're gonna have to be the ones to explain this. So once we complete our report, um, we send it to the DSMB. They have a chance to review it, and then we'll have a DSMB meeting where we walk through the report with them, um, and allow them the opportunity to ask questions.
So we wanna make sure that we're
prepared for the questions that they
might ask if they see something strange,
that we know that we've done our due
diligence, um, and can stand behind the
results and the, the decisions that are
being recommended by the, the model.
Scott Berry: Yeah.
Yeah.
Um, so how long does this take? You receive the data; how long do you generally give yourself for this? And I know the world of this: at the end of a trial, people talk about three months to clean data and run analyses, typically months at the end. For a typical adaptive analysis in a trial, how long do you guys generally have to do these, Michelle?
Michelle Detry: So our goal is five business days from when we receive the data to when we send the result to the DSMB. Now, of course, there's always an "it depends" on that, you know, there may be situations where there are many, many models that need to be run, and I know we're not gonna talk about platform trials here, but sometimes there's more that needs to be done at an interim, where the five days is not, um, sufficient. But that's our target when we start.
And the way that we meet that target
is by the preparation ahead of time.
All those things Anna mentioned of
getting all our programs in place so
that when we get the data, we, we jump
in, we're looking at it, we understand,
you know, all our programs are written
and we're really thinking about what's
coming in and what the, um, models are
saying to do and whether this is correct.
And if we map this out well ahead of time, you know, things can be turned around really quickly. So if you think about that, it takes us, you know, the five business days to do our part, but we also work ahead of time to make sure that the time from when the trigger is met to us getting data is as short as possible. So there's not a, you know, often, like Scott, you were alluding to the three months, often that's where the time comes into play, is that there is this perception of needing to, uh, take the data and clean it for months and, um, get it all ready before it's sent to us. And so that's where we also advise on how to do that quicker, or not necessarily quicker, but start it earlier, so that the time between the trigger, us getting the data, and us sending it to the DSMB can be as short as possible. And often our target from when the trigger is hit to us getting data is like two to three business days.
We've had trials where we get
data the next day because we've
been able to, you know, prepare
with the group ahead of time.
They know what they're extracting.
It's limited, the programs are
in place, but it's, you know,
often two to three business days.
But again, some of that too is flexible
depending upon the accrual of the design.
Um, if you have a very fast
accruing trial, you're gonna wanna
be more efficient and have these
timeframes as short as possible.
But if you have a really slow accruing
trial, like in a rare disease, you know,
just for feasibility purposes that may
be lengthened between trigger hitting
and us getting the data, um, just to,
uh, allow for feasibility purposes.
But again, we always target
for us the five business days.
But I'm always gonna put
that with an asterisk.
And it depends, 'cause it's really
gonna depend on the underlying model.
But that's our initial target we start with.
Scott Berry: Yeah.
Yeah.
Okay.
So now you receive the data, you run the model, uh, you run the pre-specified model. And now you are interacting with the DSMB, and maybe triggers have been hit for futility. Uh, Anna, you talked about enrichment designs; this might be that certain patients, uh, are now excluded from future randomization, so that could be done. Randomization probabilities could change, enrollment could stop, uh, the trial could go from phase two to phase three, all of this. So now you're running the models, and your interaction is with the DSMB, and DSMBs in adaptive trials are different than typical trials. So, uh, how is that interaction, how does that typically go?
Anna McGlothlin: Mm-hmm.
So one thing we always try to do is
before we get to the first interim
analysis, we'd like to have, uh,
a conversation with the DSMB.
Um, a lot of times they may already
have meetings scheduled that
they're looking at safety before
we ever get to the first interim.
So we like to have an opportunity
to come to one of those meetings.
Um, before we are unblinded, before, you
know, uh, we get to the interim and have
someone talk them through the design.
Have the sponsor or whoever was um,
involved in designing the trial, um,
just describe the design to them.
Um, make sure that they understand
what they're going to see when they
get to an interim, and make sure that
everyone's on the same page about the
role of the DSMB in an adaptive trial
because so much work has gone into creating this adaptive design, understanding how it works, understanding that, you know, the type one error rate is controlled, that it has the characteristics that you want it to have in terms of power, or you know how likely it is to stop. So you wanna make sure that the DSMB understands that these rules are pre-specified, that these are not just guidelines, that these are expected, that if one of these decisions is reached, that's what's going to be, uh, followed and recommended.
Um, of course the DSMB has this very
important role of monitoring safety, and
they may have additional recommendations
that they need to pass along to the
sponsor, um, in their role, but to make
sure they, they understand the design and
their role in communicating the decision
from the adaptive design is a little
bit different than their typical role.
Scott Berry: Yeah.
So, uh, do you get, Michelle, do you get circumstances, and we talk about this a lot, that the DSMB's role at that point is not to redesign the trial? As Anna described, the design is there, the protocol is there. Is that a struggle in some of these circumstances, that the DSMB may not like the design at that point, or, or redesigns it at that point?
Michelle Detry: There are some situations where that has happened, and I guess, like, the way we try to address that is exactly through what Anna said about having this kickoff meeting ahead of time, of making sure that the DSMB understands the design well in advance of the interim analysis. Um, DSMB members, when they sign on to be a DSMB member, they ultimately, uh, you know, agree to the protocol. And so that is a time where, if they disagree with the protocol, disagree with the design, they have a time to, you know, express those recommendations if there are some. Or to consider whether, you know, if they have like a fundamental disagreement with the design, that's a consideration of maybe they're not the best to be on the DSMB. Um, but in advance of an interim, having that in an open session is the time to do that and have everyone get on the same page.
And when I'm serving on a DSMB, I always take that time to ask all the questions to make sure that scientifically I agree with the design. And again, it doesn't mean that, if I'm a DSMB member, it's exactly the design I would've designed. That's not the point. It's whether I agree with the design answering the questions of what's been put forth in front of me, and my role in, um, being a DSMB member.
So sometimes that does happen. More often it's, um, just kind of a misinterpretation of role. And once we express to them the things like Anna said about, you know, the work that went in ahead of time, the sponsor decided that this is what they want the design to do, you know, as far as maybe a futility stopping rule, this is where they want it to stop or they don't want it to stop. Of course, the DSMB is always gonna make any recommendations based upon patient safety. Um, you know, regardless of what the design says to do, they're going to make any recommendation they need to make to protect the safety of the participants in the trial. Um, but emphasizing to them, you know, this is what the sponsor wanted, to keep going, or the sponsor wanted to be more aggressive on stopping for futility, and the sponsor understood, you know, the risks and the consequences with that when they designed the trial and, uh, signed on with it.
And the nice thing also, tying back to what Anna said about understanding the design, is that when we're in closed session with them and, let's say, a trigger is hit for an action, we can further express to the DSMB how this design process worked and how, you know, the sponsor, this is what they wanted to do. 'Cause both Anna and I have done the design and had that consulting experience and, you know, can assure the DSMB that this is what happened in the design phase, and, um, everyone's on board with this is what the trial's supposed to do, because this is what they specified in the protocol.
Scott Berry: Yeah.
Do you find yourself at that point, um, you know, an adaptive design might, uh, Anna talked about the predictive probability of success by the maximum sample size, um, statistical quantities that drive it. Do you find yourself, uh, spending a good bit of time educating what goes into those numbers, the role of that? Is there almost, you know, to them it might be a black box, and how important is that role, to explain what it is they're looking at?
Michelle Detry: Yeah, we do.
Um, I think understanding those
quantities, especially predictive
probability, you know, some DSMB members
may not be as familiar with that quantity.
Um, if it's a predictive probability used for futility stopping, they may be more, um, familiar with something like conditional power, and we can draw the, you know, the comparisons of how this compares to conditional power, what this means. Um, and again, presenting the report to them in the closed session is very much the opportunity where we want them to ask questions. We want them to say, okay, what does this mean? Does this mean, um, we went over it by a lot? Is this strong evidence for doing the adaptation, or weaker evidence? You know, how did these data anomalies factor into it? We're prepared to answer all those kinds of questions. But yes, often there may be, or it may be that the model is a more complex model at times, where we have to be able to describe how the model worked and what influenced the model and influenced the result. And there's often a lot of questions on that.
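(To make the comparison Michelle mentions concrete, here is a rough simulation sketch for a binary endpoint. The sample sizes, interim counts, Beta(1, 1) priors, and the 0.975 final success rule are all hypothetical; the point is only that predictive probability averages over the posterior uncertainty in the response rates, while a conditional-power-style calculation fixes them at the observed estimates.)

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical interim state (per-arm counts and sample sizes are invented).
n_interim, n_max = 60, 120
x_trt, x_ctl = 27, 18          # responders observed so far on each arm

def final_success(xt, xc, n):
    """Hypothetical final rule: Pr(p_trt > p_ctl | data) > 0.975 under Beta(1,1) priors."""
    pt = rng.beta(1 + xt, 1 + n - xt, 10_000)
    pc = rng.beta(1 + xc, 1 + n - xc, 10_000)
    return np.mean(pt > pc) > 0.975

n_remaining = n_max - n_interim
n_sims = 1_000
pred_success = cond_success = 0

for _ in range(n_sims):
    # Predictive probability: future response rates drawn from the current posteriors.
    pt_draw = rng.beta(1 + x_trt, 1 + n_interim - x_trt)
    pc_draw = rng.beta(1 + x_ctl, 1 + n_interim - x_ctl)
    pred_success += final_success(x_trt + rng.binomial(n_remaining, pt_draw),
                                  x_ctl + rng.binomial(n_remaining, pc_draw), n_max)

    # Conditional-power-style analogue: future rates fixed at the observed estimates.
    cond_success += final_success(x_trt + rng.binomial(n_remaining, x_trt / n_interim),
                                  x_ctl + rng.binomial(n_remaining, x_ctl / n_interim), n_max)

print(f"Predictive probability of final success: {pred_success / n_sims:.2f}")
print(f"Conditional 'power' at the observed rates: {cond_success / n_sims:.2f}")
```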
Anna McGlothlin: Yeah, and I think we
always like to think about how we're
presenting results in our report.
Um, I think one of the things we pride ourselves on is creating a report that can be consumed, um, even, you know, by someone who's not a statistician, who maybe doesn't have familiarity with adaptive designs. But we spend a lot of time thinking about how we can, uh, show the data in a way that helps convey, um, the rules that are being, uh, evaluated, you know, how can we visualize what that predictive probability looks like and what it means? Um, so we spend a lot of time trying to think about, you know, not just are we over or under some threshold, but also can we help understand what that means, like Michelle was talking about.
Scott Berry: Yeah.
Michelle Detry: Yeah, and our reports are not just tables; there are lots of figures. We try to emphasize the figures. And then we'll write interpretations in words in there, um, just to help, like Anna said, so they can consume it ahead of time during their review, instead of coming to the meeting and having us need to explain it. So we try to make them, um, very, um, self-contained for when the DSMB reviews them.
Scott Berry: Yeah, I mean, boy, it sounds exciting, it sounds stressful to receive this data. Michelle, I know you're a big sports fan, and less so as a big sports fan, but you know, it's that data coming in that sounds super exciting. Uh, it's incredibly important work for the success of trials, the treatment of patients, and it is the, uh, the exact name of this podcast: you guys live in the interim. And I appreciate you coming on, and as Michelle alluded to, we will talk about implementation of platform trials, which has other very interesting things to it. So very much, uh, enjoyed having you on In the Interim.
Anna McGlothlin: Thanks.
Michelle Detry: Thanks.
It was great talking to you.
Appreciate it.
Scott Berry: Thank you.