Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: All right. Welcome, everybody. Welcome back to In the Interim, where we investigate all things statistical, scientific, and, as we may talk a little bit about today, medicine, in the world of clinical trial design, innovative clinical trial design. I'm your host, Scott Berry, and I have a wonderful guest today, a good friend of mine who has been working with, and at, Berry for a number of years: Dr. Roger Lewis. Sort of a double doctor, because he has a PhD in biophysics and he's also an MD. He's a professor at the Geffen School of Medicine at UCLA, a member of the National Academy of Medicine, and also a Fellow of the ASA. So Roger, welcome to In the Interim.
Roger Lewis: Great. Well, it's a pleasure to be here.
Scott Berry: Yeah. So, an interesting topic today, and you're incredibly well experienced in, and passionate about, data safety monitoring boards. You have a long history of serving on data safety monitoring boards. You've been a member of statistical analysis committees that have presented to them; you've been all around the world of data safety monitoring boards. So we're gonna talk about them, particularly their role in new trial designs, innovative trial designs, adaptive trials, platform trials.
Roger Lewis: That would be wonderful.
Scott Berry: Uh, so my wife always reminds me that a lot of people who watch and listen to this may not know all of the details. So let's start at the beginning: what is the role of a Data Safety Monitoring Board in a clinical trial?
Roger Lewis: So the Data Safety Monitoring Board, which is sometimes called a data monitoring committee, goes by a number of different names, but they're all generally groups of people assembled to keep an eye on things as a trial is being conducted, and to protect the participants, the volunteers who participate in the trial, from avoidable risk.
So there's a step before you start a trial where the investigators, all of their collaborators, and maybe patient representatives think very clearly about: how do we design a clinical trial so we learn and improve the care of future patients, and at the same time, how are we going to protect the patients within the trial? But once the trial begins, those investigators are generally blinded to the accumulating results within the trial, so they don't have insight into what's going on.
Once the process is actually started, the data monitoring committee or the data safety monitoring board fills that gap, and keeps an eye on things when the investigators are not supposed to be looking, to avoid biasing or otherwise manipulating the trial results.
Scott Berry: So who makes up the data safety monitoring board? Who might be the members of a typical DSMB?
Roger Lewis: So a typical DSMB has members who are experts in the clinical medicine or the science of what is being done. They may know a lot about the type of therapy or the usual treatment of patients. There are usually statistical or clinical research design experts who know a lot about the interpretation of accumulating data, 'cause there are some specific challenges to looking at data multiple times as a trial is being conducted.
And then sometimes, depending on the context, there may be patient representatives or specialists in the ethical considerations. And in trials that span, or include, the enrollment of patients across multiple geographic areas, or settings in which the patients are particularly vulnerable, say they're incapacitated by their illness and can't consent on their own, there may be additional people brought in to bring in the perspectives of those locations.
Scott Berry: Oh, interesting. So we may come back to waiver of informed consent and whether the DSMB is different in a situation like that.
So during the trial, largely the Data Safety Monitoring Board are the ones seeing the data. Generally people may not understand that the investigators, the outside world, don't see the data, but the Data Safety Monitoring Board is watching it. Now, you didn't say anything about the role of the Data Safety Monitoring Board, or your view on it, regarding the scientific credibility of the trial. As the trial's going, presumably we're running this trial to answer a question, hopefully more than one question. What do you think the role of the DSMB is in making sure that the credibility of the trial is maintained, that it's answering the scientific question?
Roger Lewis: Yeah,
that's a great question.
So when I'm asked about the role of a DSMB, I usually talk about three different levels of responsibility. The first is to the participants in the trial: to protect those participants from avoidable risk, including risks that could not have been foreseen at the time the trial was designed. The second is to provide an assurance that the scientific integrity, the validity, the credibility of the trial is maintained, to the extent that is possible while protecting the participants in the trial.
So, for example, in the setting of an adaptive trial, which I hope we'll talk about, there are very specific rules in place that must be followed if the design is going to have the operating characteristics, the protection from error rates and bias, that we want it to have. One of the DSMB's responsibilities is to make sure that that trial design is followed, as long as that's still consistent with protecting the participants from risk.
The third responsibility of the DSMB in the hierarchy, in my view, is to operationalize the sponsor's or the investigators' goals with respect to the trial itself. There are decisions that sometimes have to be made as the trial is being conducted, for example, stopping a trial for futility. That should take into account all of the different goals, scientifically and otherwise, with respect to the trial, and balance them, and that can only happen if the DSMB understands in some depth what those goals were.
Scott Berry: Yeah. So let's get on to adaptive trials; I want to talk about the difference for a DSMB in an adaptive trial, and maybe set that up. If you are running a trial where the design is: enroll a hundred patients, then we'll look at the data and do an analysis, the DSMB can follow the data, and they would have to stop it themselves, otherwise it runs out to the end. But otherwise their role is largely monitoring data and looking for safety signals. An adaptive trial may be set up so that there are multiple analyses; it has adaptive triggers that can happen. Patient populations could be stopped. Randomization could be changed in an adaptive trial. So how is the role of the DSMB now changed?
Roger Lewis: Yeah, I think it's a great point. So in the traditional trial, designed, say, with a fixed sample size, the DSMB is largely looking for safety signals, or maybe operational or logistical challenges that weren't foreseen, problems that weren't anticipated. In an adaptive trial, with its various moving parts, the role of the DSMB expands to making sure that the adaptive trial is conducted as it was intended, as long as that continues to be ethically and scientifically appropriate, and, most importantly, to understand the difference.
It's one thing to understand how a traditional fixed sample size trial is supposed to be conducted, and it's qualitatively more complicated to understand how a modern adaptive or, say, platform trial is intended to be conducted.
Scott Berry: So now, I'm typically involved on the design side. I have been on DSMBs, I've been a part of them, but I'm typically on the design side. And what we're trying to build is a really efficient design that's going to answer the questions efficiently. It may have stopping rules. We define detailed stopping rules, algorithms, models, and something we've simulated a great deal: the design and its characteristics. You could almost say it could be automated within that setting. So the question is the role of a DSMB there, and by no means am I saying that this should be run without a DSMB, but the role of the humans watching this automated machine go seems entirely different from maybe a DSMB of 15 or 20 years ago, when it was kind of a fixed trial design.
Roger Lewis: So I think the role has expanded; I'm not sure it's different in its intent. So let's take the type of situation that you and I are commonly involved in: it's an adaptive design, it has planned interim analyses, there are decision rules based on statistical triggers. The DSMB should be paying attention to whether the design is being implemented as it was intended, and the DSMB should be looking at the characteristics of the incoming data that the design is, by design, blind to. Most of the adaptive trials that we work on, or see designed by others, generally have the adaptations driven by the primary endpoint of the trial.
Now, occasionally we have some that combine efficacy and safety endpoints, for example in a utility function, but there is a restricted set of data that the adaptive design is responding to. So one of the things the humans should do is pay attention to the other parts of the data stream that the algorithm may be blind to and make sure that everything still makes sense and is appropriate.
Secondly, there are settings in which what happens in real life unfortunately falls outside the range of what was simulated. For example, we may simulate a design under certain assumptions regarding the accrual rate. And just to remind everybody: for an adaptive design that uses partial, incomplete data from patients, the operating characteristics actually depend on the accrual rate. So let's suppose we have a trial in which the accrual rate is much faster than expected. The DSMB may want to verify that the operating characteristics are still acceptable.
So that's an example. I think there's a general class of situations, those unanticipated scenarios, in which the operating characteristics of the design may not actually be as well understood as they are in the regions of what might occur that we did simulate well.
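To make Roger's point concrete, here is a minimal simulation sketch, in the spirit of the design-simulation work he describes, of why operating characteristics can depend on accrual rate. Everything here is hypothetical, not any specific trial: a made-up two-arm binary-endpoint trial with one futility look that uses only patients who have completed 12 weeks of follow-up. Faster accrual means fewer completers at the interim, so the same futility rule acts on noisier data.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_power(accrual_per_week, n_max=200, followup_weeks=12,
                   p_ctrl=0.30, p_trt=0.45, n_sims=2000):
    """Power of a hypothetical two-arm binary-endpoint trial with one
    futility interim when patient 100 enrolls, where only patients with
    complete 12-week follow-up contribute to the interim look."""
    successes = 0
    for _ in range(n_sims):
        arm = rng.integers(0, 2, n_max)            # 0 = control, 1 = treatment
        y = rng.random(n_max) < np.where(arm == 1, p_trt, p_ctrl)
        # patient i enrolls at week i / rate; interim occurs at week 100 / rate
        time_in_study = (100 - np.arange(n_max)) / accrual_per_week
        complete = (time_in_study >= followup_weeks) & (np.arange(n_max) < 100)
        t, c = y[complete & (arm == 1)], y[complete & (arm == 0)]
        # simple futility rule: stop if the observed difference is <= 0
        if len(t) and len(c) and t.mean() - c.mean() <= 0:
            continue                               # stopped for futility
        # otherwise run to n_max; success if the final z-statistic > 1.96
        t_all, c_all = y[arm == 1], y[arm == 0]
        se = np.sqrt(t_all.mean() * (1 - t_all.mean()) / len(t_all)
                     + c_all.mean() * (1 - c_all.mean()) / len(c_all))
        if (t_all.mean() - c_all.mean()) / se > 1.96:
            successes += 1
    return successes / n_sims

for rate in (2, 5):  # patients per week
    print(f"accrual {rate}/week -> power ~ {simulate_power(rate):.2f}")
```

Under these made-up settings, the estimated power can shift with accrual rate alone, which is exactly the kind of deviation a DSMB might ask the designers to re-verify.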
Scott Berry: So imagine being in that situation, sitting on a DSMB and wondering. The algorithm is set up, and it might do a predictive probability of success: should we stop enrollment? What's the predictive probability of success? The algorithm's running, and you, as the DSMB, I assume you have to know a little bit about how that's functioning, what that probability is, what it incorporates and what it doesn't. You described that you need to look at the data that the algorithm can't see, the things the design didn't incorporate. It sounds like there's a whole new skill set: understanding the design, what's involved and what's not, a whole bunch of time spent with the designers before you go into the unblinded part. And you're now by yourselves, and you can't really ask many of these questions. That's hard. So a whole new skill set to sit on a DSMB of more complicated trials.
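As a concrete illustration of the kind of quantity Scott mentions, here is one common way a predictive probability of success can be computed at an interim. This is a generic beta-binomial Monte Carlo sketch with flat priors, not the algorithm of any particular trial, and the function name and interim counts are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def predictive_prob_success(x_t, n_t, x_c, n_c, n_final_per_arm,
                            z_crit=1.96, n_draws=10_000):
    """Monte Carlo predictive probability that a two-arm binary-endpoint
    trial ends in success (final z > z_crit), given interim counts and
    flat Beta(1, 1) priors on each arm's response rate."""
    # posterior draws for each arm's response rate
    p_t = rng.beta(1 + x_t, 1 + n_t - x_t, n_draws)
    p_c = rng.beta(1 + x_c, 1 + n_c - x_c, n_draws)
    # simulate the remaining patients' outcomes from those draws
    x_t_fin = x_t + rng.binomial(n_final_per_arm - n_t, p_t)
    x_c_fin = x_c + rng.binomial(n_final_per_arm - n_c, p_c)
    # frequentist final test on each simulated completed data set
    pt_hat = x_t_fin / n_final_per_arm
    pc_hat = x_c_fin / n_final_per_arm
    se = np.sqrt(pt_hat * (1 - pt_hat) / n_final_per_arm
                 + pc_hat * (1 - pc_hat) / n_final_per_arm)
    z = (pt_hat - pc_hat) / np.maximum(se, 1e-12)
    return float(np.mean(z > z_crit))

# e.g. 40 per arm observed so far, 100 per arm planned (hypothetical numbers)
ppos = predictive_prob_success(x_t=22, n_t=40, x_c=14, n_c=40,
                               n_final_per_arm=100)
print(f"predictive probability of success ~ {ppos:.2f}")
```

A DSMB reviewing a design driven by a quantity like this would want to know exactly which endpoints feed it, and which do not.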
Roger Lewis: I think that's right, but I think it's both skill set and process, so let me take them in that order. The skill set is: you need to have people on your DSMB who absolutely understand how the statistics were supposed to work, and how they will work if some of the assumptions that we tested carefully turn out not to be true. So, for example, as a general rule, if enrollment is slower than expected, the design's gonna work just fine.
Scott Berry: Yeah.
Roger Lewis: And so you don't
have to worry about that.
Whereas in the other direction,
you might or might not have to
worry depending on the details.
But there's also a process issue, which you alluded to, which is that before the DSMB sees any data, or certainly unblinded data, the DSMB can have open conversations with the people who designed the study, the sponsor, regulatory agencies, and understand the considerations that were taken into account, how the design was developed, how it's supposed to work, and how it was evaluated.
Once the DSMB has seen unblinded data, to avoid various types of bias, which may or may not actually arise very often but which we certainly worry about a lot, the DSMB needs to not discuss any of these details with the investigator team, other than through the provision of specific recommendations intended to improve the safety or validity or other characteristics of the trial. So all the conversations about how the design was supposed to work, and why it was designed that way, need to occur up front, and it requires much more preparation before the DSMB, as you say, goes into their cone of silence, if you will. I think this is an extension of the expertise that was required with, say, a traditional group sequential design. DSMBs with traditional group sequential designs generally had a statistician who understood that methodology, an alpha spending approach, for example, or an O'Brien-Fleming stopping rule. You needed someone who understood the design, but now the designs are much more complicated. So you need people who not only understand the general theory, but understand the specific application of the design that you're tasked with overseeing.
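For listeners who want to see what the O'Brien-Fleming rule Roger mentions looks like numerically, here is a small Monte Carlo sketch that calibrates the classic boundaries z_k = c / sqrt(t_k) so that the overall two-sided type I error across all looks is 5%. The information fractions are illustrative, and in practice such boundaries come from standard group sequential software rather than simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def of_boundaries(info_fracs, alpha=0.05, n_sims=200_000):
    """Monte Carlo calibration of O'Brien-Fleming boundaries
    z_k = c / sqrt(t_k): find c so that the overall two-sided type I
    error across all looks equals alpha under the null."""
    t = np.asarray(info_fracs, dtype=float)
    # simulate the null z-process at the information times: a Brownian
    # motion observed at t_k, standardized to z_k = B(t_k) / sqrt(t_k)
    incr = rng.normal(size=(n_sims, len(t))) * np.sqrt(np.diff(np.concatenate([[0.0], t])))
    z = np.cumsum(incr, axis=1) / np.sqrt(t)
    # rejecting when |z_k| >= c / sqrt(t_k) is rejecting when |B(t_k)| >= c,
    # so c is the (1 - alpha) quantile of max_k |B(t_k)|
    stat = np.max(np.abs(z) * np.sqrt(t), axis=1)
    c = np.quantile(stat, 1 - alpha)
    return c / np.sqrt(t)

bounds = of_boundaries([0.25, 0.5, 0.75, 1.0])
print(np.round(bounds, 2))  # early looks demand a much larger z than the last
```

For four equally spaced looks this reproduces the familiar O'Brien-Fleming pattern: a very demanding first boundary around 4 standard errors, easing to just above 1.96 at the final analysis.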
Scott Berry: Yeah. So we've had, and I won't mention any names, but I've been involved in the design of a trial where we have that initial meeting, and we're talking to DSMB members who, quite frankly, don't like the design. They almost want to redesign it before it starts: oh, I would do this, and I would do that. And we've had circumstances where the sponsor has largely said, I don't think you should be a member of the DSMB if you don't like the design; the role of the DSMB isn't to redesign the trial. So there's some level of acceptance of the design by being on a DSMB. Is that fair?
Roger Lewis: Oh, I think that's absolutely fair.
First of all, as someone who serves as a DSMB member, I don't think I should serve on a DSMB if I think it's a bad design, 'cause part of my responsibility is ensuring that the design is conducted as it was intended, for reasons of scientific validity. On the other hand, none of us likes criticism, and you know, we work for weeks or months or many months on the design of a trial, and the DSMB does come into the evaluation of that design with a fresh perspective.
Scott Berry: Mm.
Roger Lewis: There may actually be some really good insights, and the sponsor's response should not be to kick off the dissenters, but to decide whether they have good points that warrant a reevaluation of some of the characteristics of the design.
And I think that brings us to an interesting point, which has to do with the requirement for DSMB members to be independent, financially and scientifically, from the sponsor or the product, and hopefully intellectually from the folks who are developing the therapy. So, for example, you don't want someone on your DSMB who has spent their entire life developing compounds in class X, and then put them on a DSMB of a trial evaluating compound X, because they will want to see it work. That's not saying that people are dishonest; it's saying that we all have our own biases. And it's those covert biases that I think are the most worrisome, because they're the hardest to account for in what can be quite nuanced discussions about risk and benefit. So DSMB members are generally people who are scientific or medical experts in the clinical area, in general research methodology, in related therapies. But they need to have some level of independence from the actual product development, which is why they may look at a design and there may be things they don't like; but it may also be because they have broader insights into likely challenges with the patient population, outcome assessment, safety assessment, and those sorts of things.
Scott Berry: Yeah. And point very well taken: there can be huge value brought from the DSMB even in the early design stages. Over the years, and we won't say how many years you've been doing DSMB work, given the changes, is there a bit of a clash when people who have been DSMB members on somewhat traditional fixed trial designs jump into an adaptive trial? It's a different world, a different type of trial, and a bit of a clash with maybe the amount of time spent in the review, or the expectations for a DSMB, or the role that they play in what might be a 20-interim-analysis adaptive trial that they're trying to follow.
Roger Lewis: So there are lots of growing pains in moving people who have experience with traditional approaches into the role of oversight of these trials. And I am sure that when we're done with this recording, I will regret not having remembered some of them.
So one has to do with the level of preparation. It is an open secret that many clinical members of DSMBs are picked because they have very high stature in their fields. They have a lot of credibility, and they are very, very busy people. They often show up to DSMB meetings not having spent sufficient time reviewing the reports. And as a separate topic, these reports have just grown in length and complexity; we could spend an entire podcast talking about how damaging that is to the overall safety of the trial.
So these people are used to showing up to meetings with insufficient preparation and still being able to make important, meaningful contributions, based on their knowledge of the clinical disease and context and the fact that the trials were quite simple and they could pretty much figure it out on the fly. That's no longer true. The amount of preparation required before the meetings start, before you see data, and for each meeting, is substantially greater. And people may or may not be able to incorporate that into their other demands and their professional existence.
There is a commonly observed phenomenon that if you want the best possible review of a grant or a manuscript, the best person for that is someone who's still an assistant professor, because they will spend the time, they will care, and they want to make a good impression. And some of that is true in DSMB work as well: the best people to do this are not people who are so well known that they can't devote the time to what's necessary.
So that's one area of conflict. A second area of conflict, which I think we really need to touch on, is the difference between a rule and a guideline. If you look at what was written in the literature about data safety monitoring boards, for example, as originally envisioned by NIH institutes, a very common paragraph in those publications was that the stopping rules, say, group sequential stopping rules, were merely guidelines, and it was up to the DSMB to decide when they were appropriate to apply and when they weren't. And in fact, there was an implication that if the DSMB thought it wasn't good to stop so soon, that was just fine, and they could make that decision unilaterally.
That view, at least in my opinion, is completely inconsistent with the idea that we want adaptive trials, including group sequential trials, that have defined operating characteristics. If you want a defined operating characteristic, where you've simulated the trial under a set of rules, they are rules, just like other rules.
Scott Berry: One might say the rule is in the protocol. It's in the protocol.
Roger Lewis: Absolutely. Absolutely. So these pre-specified rules are rules, which doesn't mean they can't be broken, but if they're broken, they need to be broken for very explicit reasons, not because a member has a hunch, or is curious what would happen if the trial went a little bit longer, or disagrees with the design. It needs to be for a reason, like a safety signal we didn't anticipate. And then the reasons for deviating from the rule need to be, in most cases, discussed with the sponsor. So one of the big disconnects between DSMB work a couple of decades ago and modern DSMB work on adaptive trials is that adaptive trials have rules, and they are not merely guidelines, if we're going to simultaneously argue that we understand the operating characteristics of these trials.
Scott Berry: Yeah. So within that now, with the rules and the trial running, my analogy is that this is like a plane flying on autopilot. It has algorithms that fly the plane, but you have a pilot that's there, and the DSMB is somewhat like the pilot. Now, part of this is you have to have the DSMB there because they need to evaluate the appropriateness of those rules. I could imagine a situation where there are concerns that the data going into the algorithm are flawed. There's weird missing data. There are concerns that the data aren't appropriate, and you might think: I know it's a rule, but I'm concerned that the data aren't appropriate, and I don't think the rule is any longer appropriate.
Roger Lewis: That seems very different.
Scott Berry: Versus, I just don't like it, or it would be nice if the trial ran for another year, treating it as a guideline. That seems really hard for the DSMB, to make those judgments, but it seems that that's the key role now for them.
Roger Lewis: Absolutely. And let me give you just a couple of semi-hypothetical examples of those types of considerations. So let's say you have a futility stopping rule, and you come to the first interim analysis at which that futility rule can be applied, and let's say the disease in question is a chronic degenerative disease. The first patients enrolled in the trial may come from the reservoir of patients with relatively longstanding disease who were waiting for the trial to open, and therefore their disease may actually be qualitatively different, more difficult to slow or to intervene on, than that of patients with newly diagnosed disease. In that case, it may be that the futility rule is appropriately interpreting the data, but the population we expect to enroll later in the trial may actually have a more favorable prognosis. So we're getting the right answer on the wrong population, in which case the rule might not be appropriate.
A very similar situation occurs in global clinical trials, where there may be particular geographic areas in which, for example, a surgical procedure behaves differently than in other areas. Obviously, we'd like to have the heterogeneity in the disease process or in the surgical procedure thought out ahead of time, so the adaptive design accounts for it. But the DSMB in those cases may be suspicious that the rule is missing an important characteristic of the data stream that really needs to be accounted for.
Scott Berry: Yeah. So within these adaptive trials, there's also somewhat of a new group. There always was a statistician who was unblinded to the data, presenting safety tables, these tomes of safety tables that they go through. There was always that role, and this individual was kind of a master of the data and of presenting it. Now, in most of these trials, we have statistical analysis committees that are running the models that are driving the adaptations. The interaction between the DSMB and the statistical analysis committee also seems to be very much a new thing.
Roger Lewis: Yeah, absolutely. So, as you well know, the Statistical Analysis Committee is an unblinded group who actually run the analyses. They receive data, usually from a data coordinating center. The data they receive have usually gone through the ongoing quality assurance process, and what I mean by that is that they are clean-ish, but they are not locked. The statistical analysis committee then makes a good faith effort to implement whatever pre-specified analyses are necessary to drive the decision rules. And I think it is the rule rather than the exception that the statistical analysis committee learns things about where the data are more or less consistent, where they're more or less complete, where there may be issues with internal inconsistency in the data, things that are important to understand to help assess the credibility, or the level of assurance we want to place in the results, of those interim analyses.
One of the things the statistical analysis committee has a lot of insight into, just to give a concrete example: if you're doing analysis three, you can see which of the patients in analysis three were actually also in analysis two, but have had an outcome, or God forbid even a treatment assignment, change since the prior analysis. And in big trials, those things happen. It helps us on the statistical analysis committee side understand the level of assurance that we should place in the data. And you can picture a setting, in your interaction with the DSMB, especially if something is near a statistical trigger, where the DSMB would appropriately ask questions about the quality of the data that yielded that result.
Scott Berry: Yeah. And so then that statistical analysis committee is trying to diagnose the appropriateness of the analysis and of the data that went in, and interacting with the DSMB, I imagine, back and forth a little bit about that appropriateness when the time comes for a decision or a trigger is met.
Roger Lewis: Yeah. The pattern that we see is that the statistical analysis committee statisticians, singular or plural, present their interim analysis report, which, as you noted, is generally separate from the safety reports and secondary endpoints and all those other reports. They present that interim analysis report to the DSMB, and then they stick around to answer any questions. Usually the DSMB charter allows the DSMB to then kick the statistical analysis committee out of the meeting. My experience is that usually the statistical analysis committee has very useful insights into the data, and they rarely get kicked out.
Scott Berry: Yeah. Yeah. Yep. You know, I forgot to start this off with my layup question for you, which I think we want to get on the podcast: should the DSMB be blinded to treatment assignment?
Roger Lewis: Okay. So the DSMB is inherently tasked with balancing efficacy and safety at a very fundamental level, and those considerations are virtually never symmetric. We will, for example, want to continue a trial that has a good chance of showing that the new treatment is helpful; we rarely want to prove with the same level of certainty that we can hurt people with a new treatment. So, given the lack of symmetry, there's absolutely no value whatsoever in blinding the DSMB to treatment assignment at any point in the trial.
And just to be clear about my position on this, which I know you anticipated: what I mean is that every table, figure, and comparison presented to the DSMB should explicitly label the treatment arms. No A versus B, no shuffling of them. They should be labeled with the actual names of the treatments, so that the DSMB is never confused. There are extraordinarily high-profile cases where DSMBs thought they knew what was going on when they were looking at blinded data, and by blinded, in this case, I don't mean aggregated; I mean treatment assignments separated but labeled A versus B, sometimes with that randomized, and the DSMB made bad decisions because they assumed they knew what was going on, but they were wrong. The Cardiac Arrhythmia Suppression Trial is the most notable example of that.
One of my favorite publications on this was an editorial from Curt Meinert in the New England Journal many years ago, entitled "Masked Monitoring in Clinical Trials: Blind Stupidity?"
Scott Berry: Yeah.
Roger Lewis: That was the New England Journal, 1998. He was right then. He's still correct.
Scott Berry: So largely the DSMB should have access to everything. In this notion of risk-benefit, maybe you could construct weird cases where there's some efficacy endpoint that's separate, but largely it's a question of: is the risk-benefit profile for a patient in the trial still good? And they should essentially have everything they can possibly get, data-wise, to make those decisions.
Roger Lewis: I think that's exactly right. Now, I'll give you an example of where people get confused about this. Picture a trial that's gonna take a year to get to the first planned interim analysis for efficacy, the first opportunity for early stopping for efficacy, but the charter, appropriately, has the committee meet after six months. A common area of confusion is whether, at that first six-month meeting, the DSMB should see efficacy data. The answer is yes, they should, just to be clear, because they will be balancing efficacy and safety even at that six-month meeting. The confusion arises because there's no opportunity to stop early for efficacy within the design at that first meeting, so people sometimes argue it's a safety-only meeting. The reason this is incorrect is that the level of safety concern the DSMB should appropriately tolerate depends on the benefit that patients are receiving, or appear to be receiving, from the therapy. You simply cannot separate them.
Scott Berry: Yeah. Now, do adaptive designs address this more appropriately, because the rules are laid out? Maybe there are now five interim analyses, and it says very clearly: you cannot stop at this analysis; here are the rules. Where maybe historically there was a notion that when the DSMB meets, maybe we'll declare success, and it was a little bit more guideline driven, and there may have been thought to be reasons not to show them efficacy. Maybe adaptive designs help that. And of course, in many of these adaptive designs, you can't possibly follow the trial unless you're completely unblinded to treatment assignment, because of the analyses that are being run.
Roger Lewis: You know, I think many of these things come up on what I'll call soft considerations: the safety outcomes that are not involved in the primary efficacy analysis, the total burden, those sorts of things. So I'm not sure the adaptive design helps those that much. With respect to the direct balance of efficacy on the primary endpoint versus, say, a preconceived primary safety consideration, I think the fact that the adaptive design in general allows you to look in a pre-specified way earlier does help. But the fact is that each time the DSMB meets, if patients have been treated within the trial, they should be seeing efficacy and safety data so that they can explicitly balance them.
Scott Berry: Yeah. Wonderful. The name of this podcast is In the Interim, and we have DSMBs that live their lives in the interim, looking at the interim data. Roger, I appreciate it. That was fabulous. Thank you for joining In the Interim.
Roger Lewis: Great.
Thank you for having me.
Scott Berry: Thanks, Roger.
Listen to In the Interim... using one of many popular podcasting apps or directories.