Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: All right. Welcome everybody back to In the Interim, where we live, uh, in the interim analysis of clinical trials, the interim analysis of science. I'm joined today by, what has now become your fourth or fifth time on In the Interim, Kert Viele, and by Melanie Quintana, who leads our consulting group here at Berry Consultants. I'm gonna lose control of this podcast today and hand it over to Kert to be the host. Kert, do you want to be the host and introduce the topic for today?
Kert Viele: Yeah.
So, um, anyway, I like being in charge.
I like interrogating Scott.
Um.
So the topic today is something we probably should have talked about a while back, given what we do day to day, which is Bayesian statistics and how it's relevant to clinical trials. We want to talk about what it is, when it's useful, standards of evidence, and how it interacts with adaptive trials. We're gonna hit a lot of stuff. So let me start with the absolute hardest question on the list. Scott, can you define Bayesian statistics?
Scott Berry: Wow, wow.
Maybe I shouldn't have
let you be the host,
Kert Viele: I know.
Scott Berry: if you're gonna do
this, if you're gonna do this to me.
So, we can do this mathematically, though I don't think people are interested in the math: we place a prior on our parameters of interest. We create a model, as in frequentist statistics, so we have a model for the data given those parameters, and conditional on the data we update our uncertainty about the unknown parameters in our model. The beauty of it is that the prior can be a wide range of things: it can be external data, it can be prior opinion, it can be hierarchical structure. And within that context, based on our current knowledge of the data, what's our current posterior distribution, our current uncertainty about the parameters of interest?
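In standard notation (theta for the unknown parameters, y for the observed data, pi(theta) for the prior, f(y | theta) for the data model; none of these symbols come from the conversation itself), the update Scott describes is Bayes' rule: the posterior is proportional to the likelihood times the prior.

```latex
\pi(\theta \mid y)
  \;=\; \frac{f(y \mid \theta)\,\pi(\theta)}{\int f(y \mid \theta')\,\pi(\theta')\,d\theta'}
  \;\propto\; f(y \mid \theta)\,\pi(\theta)
```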
Kert Viele: Alright, I'll turn this one over to Melanie. We've often talked about this; it comes up in a lot of regulatory documents and elsewhere as the use of informative priors. Somehow I want to say, hey, I already know something about what the parameter is like. We'll certainly get to that as we go, but Bayesian analysis is also a lot broader. Melanie, can you give us some examples of places you would see Bayesian analysis in a clinical trial, in addition to just borrowing historical or external data?
Melanie Quintana: Yeah. I would say actually most of the trials where I've done a Bayesian analysis as the primary analysis, a Bayesian analysis within the trial itself, most of them actually have a non-informative or vaguely informative prior. So I actually rarely do it in the space of "I have this really informative prior and I wanna use it." I think it works great in the situation where we have other evidence that we wanna synthesize, within the trial itself or slightly adjacent to the trial, and we wanna use that evidence to come up with an overall synthesis of information that we can then use to estimate our current parameter. So I think there's lots of examples. For instance, borrowing information across concurrent and non-concurrent controls in a platform trial, or something like that. That's not using an informative prior; that's borrowing information across multiple sources.
Kert Viele: So with respect to that, we've talked about the modeling aspects of Bayesian statistics. We also use Bayesian methods a ton in adaptive trials. Scott, could you talk about why we might use them in adaptive trials? Why is it a natural fit there?
Scott Berry: Yeah. As I tried, poorly, to describe Bayesian statistics, one of the nice aspects of it is that you start with a distribution of the parameters. You observe some data, and that's updated based on your current information. And by the way, that could be auxiliary markers, early clinical markers, a 30-day outcome where 90 days is primary. Now you get an updated distribution of the parameters of interest, and if you collect a little bit more data, it's updated again, and you get the same answer whether you update five times or do it all in one fell swoop. It's this beautiful mechanism, based on the current level of data, that captures the appropriate amount of uncertainty.
So if I'm gonna make an adaptation in a trial, I'm going to select a dose in a seamless trial, I'm gonna make a decision on enrichment, do we want to drop a population or not, then based on all the data I've collected in the trial, what is my uncertainty about these parameters, about the effect in this population, about this dose, the likelihood of success in the trial? The Bayesian approach is such a natural and beautiful way to do that. Part of it is also the real awkwardness of using frequentist statistics in that paradigm, where analyses are based on the sample space. In adaptive trials, that gets really, really complicated very, very quickly. In the Bayesian approach, it's a very natural thing, because you only have to analyze the data you've seen, and you're not concerned about data you haven't seen, which does matter in a frequentist analysis. So it's a very, very natural way to continually update the distribution and then drive adaptive decisions. Even if at the end of the day the trial is analyzed using a P value, you can design better, smarter trials by using Bayesian methods throughout the adaptation process.
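A minimal sketch of the sequential updating Scott describes, using a hypothetical beta-binomial example (a Beta(1, 1) prior on a response rate and made-up interim counts): updating after each batch of data gives the same posterior as analyzing everything at once.

```python
from scipy import stats

# Hypothetical single-arm example: Beta(1, 1) prior on a response rate.
a, b = 1.0, 1.0

# Interim batches of (responders, non-responders), invented for illustration.
batches = [(12, 8), (10, 10), (7, 3)]

# Sequential updating: each posterior becomes the prior for the next batch.
for r, nr in batches:
    a, b = a + r, b + nr
print("sequential posterior:", (a, b))                    # Beta(30, 22)

# One-shot updating with all the data at once gives the identical posterior.
r_tot = sum(r for r, _ in batches)
nr_tot = sum(nr for _, nr in batches)
print("all-at-once posterior:", (1.0 + r_tot, 1.0 + nr_tot))  # Beta(30, 22)

# Current posterior probability that the response rate exceeds 40%, say.
print("P(rate > 0.40 | data) =", round(stats.beta.sf(0.40, a, b), 3))
```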
Kert Viele: So let me kind of home in on the last thing you said there, which was a better trial. I've often heard this notion that if you're using Bayesian methods, and really adaptive designs in general, that it's a shortcut. The goal is to do something smaller, to try to make decisions on less information. If I use an informative prior, I need less data in my current trial; I'm trying to make the sample size smaller. A lot of times it's talked about as using less information, and usually this is referred to as lowering the bar. So are Bayesians lowering the bar, or is there more to it than that?
Scott Berry: Uh, absolutely not. Now, in some circumstances a Bayesian analysis walks in the door that could have aspects of lowering the bar. Depending on what that means, it could be that the success criterion is easier than a traditional 5% two-sided type 1 error. In some scenarios it could end up being a smaller sample size. But many times, by employing these methods, we're actually collecting more data. In this scenario we're analyzing more outcomes combined together; we're borrowing from a patient population. In that scenario we may be utilizing more data to drive the answers. If you're running a trial where you're estimating the effect in pediatrics and you're borrowing from adults, you're using more data to provide a better answer for pediatrics. So there are many circumstances, both in Bayesian and in adaptive designs, where we actually go bigger and we collect more data. We have bigger phase 2 parts in the phase 3 trials. So I think it's actually in many ways kind of the opposite of that, but yet I think we're labeled as that.
Melanie Quintana: Yeah, I mean, I think that's right. In Bayesian settings we're trying to efficiently use all of the information in front of us, and because we're sometimes bringing in more information that otherwise might have been ignored, in some senses we might be raising the bar. Like Scott talked about with multiple endpoints: we might have a model where we're looking at what's the common treatment effect across multiple endpoints, and now we might have to raise the bar a little bit. We might have to show that each of these endpoints moves in the right direction to actually be positive. So in some senses, certainly, I think we could be raising the bar because we're bringing in more data.
Scott Berry: So, Melanie, you're involved in the HEALEY ALS platform trial, and you can go back to a previous podcast with Merit Cudkowicz and go into that. But you have a Bayesian analysis at the end of that trial, and it combines multiple things together, hopefully for a better answer. Why Bayesian in that trial?
Melanie Quintana: Yeah. So that's a case where we want to be able to, in a platform trial, share controls across the multiple treatments that are being tested. Each treatment might have its matched controls, and we're sharing controls across them. The Bayesian setting lets us borrow information across these different sets of controls, but not pool the information. If the controls are looking very different, we might borrow less information; if they're looking very similar, we might borrow more information. So the Bayesian model really lets us get at that dynamic borrowing of the shared controls. It's really this thought that just because something might be slightly different, we don't need to throw away the data. We can model those differences and use all of the data to get our answers. So that's one of the reasons we're using a Bayesian model in the ALS platform trial, and I think often why we use Bayesian models in platform trials, and in ALS in particular.

There's also this situation now, actually in multiple neurodegenerative diseases, where the diseases are fatal, and we're seeing models that are joint models of a functional outcome and time to death. We wanna be able to capture the treatment effect across both of these endpoints and get an integrated assessment of how the treatment is affecting both function and time to death. Again, the Bayesian framework is well suited to these sorts of integrated assessments across multiple endpoints. So that's another reason why we're using a Bayesian model, and why we might use it even in a non-platform trial in ALS, to do this joint modeling of multiple outcomes.
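A rough sketch of the dynamic borrowing Melanie describes, under simplified assumptions: normal estimates of each regimen's control response rate, a normal hierarchical model, and a crude moment estimate of the between-group spread. All numbers are hypothetical, and this is not the HEALEY ALS model itself. When the control groups look alike, the estimated between-group spread is small and each group is shrunk strongly toward the common mean; when they look different, the borrowing backs off.

```python
import numpy as np

def dynamic_borrow(estimates, se):
    """Empirical-Bayes shrinkage under a normal hierarchical model:
    group_mean_i ~ Normal(mu, tau^2), estimate_i ~ Normal(group_mean_i, se^2)."""
    est = np.asarray(estimates, dtype=float)
    mu = est.mean()
    # Crude moment estimate of the between-group variance tau^2 (floored near 0).
    tau2 = max(est.var(ddof=1) - se**2, 1e-8)
    # Weight each group keeps on its own estimate: tau^2 / (tau^2 + se^2).
    # Small tau^2 (groups look alike) means heavy borrowing toward the common mean.
    w = tau2 / (tau2 + se**2)
    return w * est + (1 - w) * mu, w

se = 0.05  # hypothetical standard error of each control-group estimate

# Scenario 1: control rates across regimens look similar, so borrowing is heavy.
shrunk, w = dynamic_borrow([0.30, 0.32, 0.29, 0.31], se)
print("similar controls:   weight on own data =", round(w, 2), "->", shrunk.round(3))

# Scenario 2: control rates look different, so the model borrows much less.
shrunk, w = dynamic_borrow([0.20, 0.35, 0.45, 0.28], se)
print("different controls: weight on own data =", round(w, 2), "->", shrunk.round(3))
```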
Kert Viele: So related to that, could you have been a frequentist and accomplished these same goals? I'm going through my hits here.
Melanie Quintana: Sure, sure. Certainly, I think you could. Would it have been as easy to accomplish those goals? I think you could build something to share the controls across multiple, what we call regimens, in a platform trial. You could build sort of a frequentist meta-model that might be able to do that, but I think it gets a bit more difficult. In the Bayesian setting, it's just very natural to build these hierarchical models that can really effortlessly borrow across multiple sources. So yes, I think you could do it; it would've been, at least for me, a little bit less natural to build that model. The same goes for the joint models: there are certainly frequentist joint models of both function and time to death, but it can be a little bit less natural, at least for me, to build those models and get them to do the exact thing that we want them to do, in particular to have this integrated assessment across all of the endpoints. So there are small features that the Bayesian framework might be a little bit better suited for, or at least a little bit more natural for building the model. But sure, you could do something similar in a frequentist setting, I think.
Kert Viele: So I, I, I'm
being picky a little bit.
Yeah.
Melanie Quintana: I could,
someone could.
Kert Viele: I'm being picky.
But you know, part of the idea here is that Bayesians are naturally trained on this idea of collect data, evaluate, collect more data, evaluate, collect more data, evaluate, which is essentially the adaptive trial formulation. It's actually the drug development formulation; we just put them in separate trials. So this is a very natural setting. I think there are situations where there are no frequentist analogs, notions like predictive probabilities. For the idea of predicting the next trial, or just the completion of the current trial, you have to take into account the uncertainty in what the treatment effect actually is. Conditional power is close, but conditional power has to make an assumption about what the treatment effect is and project forward, whereas in the Bayesian approach all of these things are unknown and they're all being used in a synthesized, coherent way, which I think is very, very natural in this setting.
Scott Berry: Yeah, I mean, in
some ways Bayesian is easier.
Kert Viele: Yeah.
Scott Berry: Conditional power works if you have enough data to estimate the effect well, but you have to be smart enough to know at what sample size you can use it. A predictive probability just works, based on the distribution. You update it, and you can calculate the predictive probability that the trial will be successful with the current sample size or with a larger sample size. It incorporates uncertainty in the right way, so you can be really comfortable that it's gonna give you the right answer, whereas conditional power sometimes works, but sometimes gives you really bad answers because it doesn't account for the uncertainty at the current place you're sitting.
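A minimal sketch of the contrast being drawn here, for a hypothetical single-arm beta-binomial trial (flat Beta(1, 1) prior, success at N = 100 if the posterior probability that the response rate exceeds 30% is above 0.975; the interim counts are invented): conditional power plugs in one assumed rate and projects forward, while the Bayesian predictive probability averages the same projection over the current posterior for the rate.

```python
import numpy as np
from scipy import stats

N_FINAL, P0, WIN = 100, 0.30, 0.975   # final N, null rate, posterior success threshold
x, n = 28, 60                          # hypothetical interim data: 28 responders of 60

def trial_wins(x_final):
    """Success criterion at the final analysis: P(rate > P0 | data) > WIN."""
    return stats.beta.sf(P0, 1 + x_final, 1 + N_FINAL - x_final) > WIN

n_rem = N_FINAL - n
future = np.arange(n_rem + 1)                       # possible future responder counts
wins = np.array([trial_wins(x + y) for y in future])

# Conditional power: fix the response rate at a single assumed value.
for assumed_rate in (0.35, 0.45):
    cp = stats.binom.pmf(future, n_rem, assumed_rate)[wins].sum()
    print(f"conditional power at rate {assumed_rate}: {cp:.3f}")

# Predictive probability: average over the current posterior Beta(1+x, 1+n-x),
# so future responders follow a beta-binomial distribution.
pp = stats.betabinom.pmf(future, n_rem, 1 + x, 1 + n - x)[wins].sum()
print(f"predictive probability of success: {pp:.3f}")
```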
Kert Viele: At one point, in, well, we'll call 'em the old, old days, we took a Bayesian design and were asked to make it frequentist, and we went back through and converted all of the posterior probability thresholds to P values and essentially replicated the design. But one of the fundamental issues is we would've never found that design, or naturally come to it, without looking at it in a Bayesian framework. So we could translate it, but we wouldn't naturally think of it from first principles.
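A small sketch of why that translation can work, under common simplifying assumptions (a normal approximation for the treatment-effect estimate and a flat prior; the numbers are hypothetical): thresholding the posterior probability that the effect is positive at 0.975 is then the same rule as requiring a one-sided p-value below 0.025.

```python
from scipy import stats

# Hypothetical estimate of a treatment effect and its standard error.
effect_hat, se = 1.8, 0.9
z = effect_hat / se

# One-sided p-value for H0: effect <= 0.
p_one_sided = stats.norm.sf(z)

# Posterior P(effect > 0 | data) with a flat prior: effect ~ Normal(effect_hat, se^2).
post_prob_positive = stats.norm.sf(0, loc=effect_hat, scale=se)

print(f"one-sided p-value:       {p_one_sided:.4f}")
print(f"posterior P(effect > 0): {post_prob_positive:.4f}")
print("identical decision rules:", (post_prob_positive > 0.975) == (p_one_sided < 0.025))
```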
Scott Berry: Yep.
Yep.
Kert Viele: Speaking of regulators and old times and new times, where is the regulatory picture for Bayes? Where's it been and where is it going?
Scott Berry: Yeah. Where has it been and where is it going? We're coming out of, as you said, the olden days. Berry Consultants has been around for 25 years. I've been doing this 25 years; Don has been doing this much longer than that. You know, 25 years ago it was very rare to see Bayes, especially in an FDA trial. I think the first FDA approval at CDER was something like 2008 to 2010. There was very little mention in guidance documents, no mention in ICH guidance documents. Now we're at a situation where CDRH has a guidance document out, we have guidance documents out from the US FDA, and we're supposed to be getting one from the EMA, I think within a month, a guidance document on Bayes. The draft guidance that just came out on ICH E20 mentions Bayesian designs. They're being used in confirmatory trials. They're being used a great deal in learning trials outside of the FDA. They're being used in comparative effectiveness, they're being used in NIH trials. It's being used a great deal, being accepted by FDA and EMA, and it's part of the guidances for adequate and well-controlled trials.
Melanie Quintana: Yeah, and the whole Complex Innovative Design program at the FDA, too: many of the trials that were designed there used Bayesian methods in some way. Many of them borrowed from external data. So I think that really helped a lot with getting more of these designs in front of the FDA, knowing what sort of simulations we might need to show them to get them comfortable with the analysis and those types of designs. So I think that really helped move things forward as well.
Scott Berry: So an interesting question, Kert, might be: what use is there for frequentist statistics in clinical trials? And I say that only from the perspective that I do think, in almost all of science, the perfect scenario for a P value, and maybe the only scenario left in science where people think it's relevant, is a phase 3 clinical trial. In part it's because it's so prospectively defined. In every other scientific field there's p-hacking going on, there's abusing P values or misrepresenting them, and Bayesian methods are making huge inroads in all of that. In clinical trials, though, if somebody came to me and wanted to run a fixed trial with a pretty standard endpoint, a phase 3 trial that's one of two adequate and well-controlled trials, I'd tell 'em not to do Bayes. If there's no reason for it, if you're not bringing in extra information, you're fighting an uphill battle in that scenario. Now, if you're talking about a rare disease, if we're talking about disease progression models, ALS, modeling multiple inputs, you get out of that mold. Frequentist hypothesis testing, that's really the last place left for it in clinical trials, and as you move off of that to learning trials, to adaptive trials, to comparative effectiveness trials, it kind of loses any kind of benefit, and Bayesian has a good bit of value in all of those scenarios.
Kert Viele: So you got a little too close to my pet peeve, Scott, so I'm gonna mention it. There's kind of a cottage industry right now where I run a regular frequentist trial, I want p less than 0.025, and I get p equal to 0.04 or 0.07 one-sided, so it's not significant. And then what I do is the following: I go and write a paper, and I look at priors, and I come back and go, okay, now the posterior probability of superiority is 93, 94, 95%, and now I want approval. The Bayesian analysis is often presented as, oh, I've now added something that has made this approvable, whereas the frequentist design didn't bring it out. I know I have strong opinions on this; I'm gonna let you guys take a shot at it. So is this a good thing for the world or a bad thing for the world?
Scott Berry: Uh, mixed. We being Bayesians, it's not uncommon that somebody runs a failed trial and, ooh, let's call the Bayesians, maybe they'll give us a different answer. In the scenario you're largely describing, we would probably be on the side of regulators that this is not approvable. Now, what bothers me is treating a trial as black and white, success or failure, that's it. What would be really interesting for your scenario is, okay, suppose it's not approvable: what does my next trial look like? Do I need a whole new trial of the same size, or a bigger sample size? Could I prospectively use that information in a new trial? But I agree with you, this sort of last grab of cherry-picked information to recreate a prior and show success is, I think, bad science.
Kert Viele: And related to this discussion of lowering the bar earlier: for this trial, we wouldn't be saying the trial's good enough in and of itself, but we'd be saying that potentially you could combine the information that you currently have with new prospective information, and the totality of the evidence may reach or exceed the bar that you wanted in the first place. So the goal is essentially to use the Bayesian methodology to reach the high bar, just not with one trial, as the case may be.
Scott Berry: Yep.
Kert Viele: So, Melanie, do you
have examples where we've done this?
Melanie Quintana: Ooh, putting together the totality of the evidence
Kert Viele: Yep.
Melanie Quintana: prospectively
or retrospectively?
Kert Viele: Both, whichever comes to mind.
Melanie Quintana: That's a good question. Oh, I think Scott has a good example of an FDA approval.
Scott Berry: I'll give Melanie time to think of other scenarios. But yeah, we have had scenarios exactly like this. Rebyota is FDA approved for the treatment of a rare disease, C. difficile infection, C. diff infection, and the primary analysis at the end of the phase 3 trial was prospectively to combine together phase 2 and phase 3. It's really interesting to go to the advisory committee meeting discussion of this, but a Bayesian analysis combining both of those together ended up jumping a 99% probability hurdle. Yes, it was a rare disease; there were particular circumstances about the inability to enroll these patients, so there was a need to do something better in that scenario. If you go to the FDA label for Rebyota, it's a Ferring product, there are only posterior probabilities; there are no P values in the label. We have some device examples where this has been done, and we're actually working on some more. Additionally, you can imagine the scenario of being a regulator: somebody comes to you with a trial, it's borderline, and you probably debate long and hard amongst the people at the agency, and it's close, but it's just not there. Do you tell the sponsor that it's as though you're at time zero, that you have no credit for this, even though we think it's close and the treatment probably works, but we can't approve it yet? Or should the next trial be different? Should it include the information? Should this be incorporated? I mean, I think it has to be incorporated. So those types of scenarios are not uncommon, where you're trying to utilize all of the information to make better answers.
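One common way to prospectively combine earlier and later evidence is a power prior; the sketch below is a rough beta-binomial version with invented counts and a chosen discount weight, purely illustrative and not the actual Rebyota analysis. The phase 2 data enter the phase 3 posterior downweighted by a factor a0 between 0 (ignore phase 2) and 1 (pool fully).

```python
from scipy import stats

# Hypothetical responder counts (single arm, for simplicity).
x2, n2 = 35, 60    # phase 2
x3, n3 = 110, 180  # phase 3

a0 = 0.5           # power-prior discount: how much weight the phase 2 data carry

# Power prior: Beta(1, 1) baseline prior updated with the phase 2 likelihood
# raised to the power a0, then updated with the full phase 3 likelihood.
a = 1 + a0 * x2 + x3
b = 1 + a0 * (n2 - x2) + (n3 - x3)

# Posterior probability the response rate exceeds a 50% performance goal, say.
print(f"combined posterior: Beta({a:.1f}, {b:.1f})")
print(f"P(rate > 0.50 | phase 2 + phase 3) = {stats.beta.sf(0.50, a, b):.3f}")
```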
Melanie Quintana: Yeah, and I think Scott's talking about the end goal, the best examples to show that Bayesian analysis can help us get across the finish line for approval. But if you shift to early-stage drug development, Bayesian models can really help us decide whether you should carry a product forward or not. There are many examples where, maybe in a rare disease with multiple endpoints, we'll synthesize all of the available information and get a good sense of whether this treatment actually works or not. We'll show that to the company, and it helps them make a phase 1 to 2 decision, or a 2 to 3 decision. Oftentimes, if the company needs investment, we'll do those analyses so that investors can get a sense of the totality of the evidence. So there's a great need for using Bayesian methods in early-phase development too, to really synthesize all of the available information and make a go/no-go decision or present the information to investors, I would say.
Scott Berry: So you act as though you have one pet peeve, Kert. Do you only have one pet peeve?
Kert Viele: I only have one pet peeve for this podcast. I try to get one out every podcast. I'm saving all the rest.
Scott Berry: One per podcast. Yep. Can I throw out a really cool application of Bayes? Uh, maybe I'll save that example for a different, a whole podcast.
Kert Viele: So you have one good example per podcast.
Scott Berry: Yeah, that's all I can do. Every time I think of a good idea, I do a podcast. So I have one per podcast.
I think the world is going to where we have a much better understanding of disease. It's not that there's one big disease. So we're gonna be running more and more trials where there are subgroups of disease: we run a trial, and there are five subgroups of disease that we recognize are slightly different, but related. You know, 10 years ago we would've run that trial with all five groups in one trial, and maybe the treatment failed because of heterogeneity of the treatment effect, because we just didn't understand the disease well enough, the way we used to enroll Alzheimer's trials with people who didn't have Alzheimer's. We didn't understand the disease well enough in them. So now, if I'm running a trial with those five groups, do I do five separate analyses? Within the frequentist framework I have almost two choices: I do independent analyses of the five, or I pool them all. But a Bayesian analysis, where you're estimating the effect in group A based on the results in all five groups, just gives better answers. This dimensionality of disease is going to be a huge problem for frequentist statistics. It's a very natural thing in Bayes. It's a huge problem for us as scientists, but I think it's an area where Bayes shines as this goes forward. And this, by the way, is completely not lowering the bar. It's a case where it prevents the "ooh, it didn't work in four of them, but we look kind of good in the fifth." Bayes, by modeling all of them, would largely say it probably doesn't work at all. So it's not lowering the bar at all. It's actually better answers.
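A small sketch of the point about five subgroups, again under a simplified normal hierarchical model with hypothetical numbers: when four subgroups show essentially nothing and one looks good on its own, the hierarchical estimate pulls that subgroup's effect back toward the others rather than taking it at face value.

```python
import numpy as np

# Hypothetical estimated treatment effects in five related subgroups
# (standard error 0.25 each): four near zero, one that looks impressive alone.
effects = np.array([0.05, -0.02, 0.08, 0.00, 0.60])
se = 0.25

mu = effects.mean()
tau2 = max(effects.var(ddof=1) - se**2, 1e-8)   # crude between-subgroup variance estimate
w = tau2 / (tau2 + se**2)                        # weight each subgroup keeps on its own data

shrunk = w * effects + (1 - w) * mu
print("weight on own data:", round(w, 2))
print("hierarchical estimates:", shrunk.round(2))  # the 0.60 subgroup is pulled toward the rest
```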
Kert Viele: Okay, so I have a second pet peeve for today. I do. So occasionally we get pushback on this idea: we don't wanna borrow, there's often a prior involved, we don't know what the strength of it is. We get questions such as, if you're going to ask five questions, you have five subgroups, okay, well, now is there multiplicity involved? You could really get down into the weeds of this, where, as you said, these are the kinds of mistakes we were making all the time before. If we approved a drug for all five groups in a pooled analysis, presumably it doesn't work for everybody in that trial; we just didn't look. So I haven't even hit my pet peeve yet. But one of my pet peeves is that a lot of times this kind of analysis got done after the trial, so it was done post hoc: I'm gonna look at subgroups, and it's not pre-specified, there's not a lot of standards to it. And I always found it frustrating that when you propose it as a pre-specified design and say, hey, I'm trying to control the error rates, I'm trying to make good decisions, there's resistance, but then I can do whatever I want on the back end. It led me on occasion to tell clients, you're better off proposing the pooled analysis and just doing all this stuff as exploratory later, which kills my soul. I think we're much better off now. I think there's been leaps and bounds; the FDA has a great use case on their website for doing this. But anyway, I think you're exactly right, the more we have subgroups. Let me ask you a follow-up on that, related to hierarchical modeling and everything related. Aaron Judge. We talked about his batting average in May, at .400. We said we were using prior information, and he was the highest of a group of a number of players. What's his batting average now?
Scott Berry: You know, I don't know the answer to that. I should have prepared for this, but I will guess .340.
Kert Viele: It's a little higher,
Scott Berry: Okay.
Kert Viele: I haven't looked for a couple weeks, but it was .360-something last time I looked.
Scott Berry: Yep.
So he is regressing to the mean.
Kert Viele: He is regressing to the mean.
We're not quite sure where his mean is.
I think Nick said he was expecting somewhere around .330,
Scott Berry: Okay,
Kert Viele: so we'll see.
Scott Berry: So, disease progression modeling. Melanie, you've done a lot of rare diseases and utilized Bayesian disease progression modeling.
Melanie Quintana: Yeah, talking about pet peeves, you guys were making me think of mine. My pet peeve is, you know, Scott, you said you're stuck then to choose: are you going to pool, or are you going to just test within the individual subgroups? I think a lot of times in progressive disease, the choice is that if you pool, you have no power, because it's heterogeneous and some of the people have totally progressed on the endpoint you're looking at, and so it's hopeless. So what people do is enroll a very narrow subset of the population that they think is homogeneous and fast progressing, but not necessarily the range of the population that the treatment might work in, and they're making rare diseases even more rare by only enrolling a small subset of the population. So yeah, disease progression models are one thing that we do a lot in these rare diseases. I think one of my favorite use cases to date of a Bayesian analysis is within a disease called GNE myopathy. This is a slowly progressive muscle wasting disorder that kind of starts in your lower limbs and works its way up through the body. I think the investigators at the NIH came to us, and one of the endpoints was the six-minute walk test, if I remember correctly, Scott. And there are many people who are so advanced in the disease that they can't even walk anymore, so that endpoint is completely hopeless. So we, and this was, I wanna say, one of my first projects at Berry, so maybe like 12 years ago, worked with them to develop a disease progression model looking at muscle strength across multiple endpoints. The primary analysis is going to look at the overall slowing in disease progression across multiple endpoints. So if it's somebody who's non-ambulatory, it might be informed more by the muscle strength in their upper limbs; if it's somebody who's just starting to progress, it might be informed more by the lower limbs. So it really does exactly what we would want it to do, in that the treatment effect is informed by where you are in the disease. So yeah, that's one of my favorite uses, I think, of a Bayesian model. And that trial, even though it was designed, I wanna say, almost 12 years ago, is going to read out. We're gonna hear about this trial, so we'll have a podcast on it, probably the end of this year. We're actually running that model now that the trial has run, and that's super exciting. So it's a great use case of using all of the information that you have as efficiently as possible. We use all of the endpoints so we can enroll a broad population, and hopefully it's a good use case for Bayesian methods. Hopefully the treatment works, or at least we get a definitive answer on whether it does or doesn't work.
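A toy version of the kind of model Melanie describes, not the actual GNE myopathy analysis: several endpoints each decline at their own natural-history rate (treated here as known), treatment is assumed to slow every endpoint's progression by a single shared fraction, and a simple grid posterior combines all of the endpoints into one integrated estimate of that slowing. Everything below is simulated and hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical natural-history yearly declines on three muscle-strength endpoints.
natural_rates = np.array([4.0, 2.5, 1.5])
sigma = 1.0                  # residual SD of an observed one-year change
true_slowing = 0.35          # simulated truth: treatment slows every endpoint by 35%

# Simulate one-year changes for 40 treated patients on all three endpoints.
n = 40
changes = rng.normal(-natural_rates * (1 - true_slowing), sigma, size=(n, 3))

# Grid posterior for the shared slowing fraction, flat prior on [0, 0.99].
grid = np.linspace(0.0, 0.99, 199)
loglik = np.array([
    stats.norm.logpdf(changes, -natural_rates * (1 - d), sigma).sum() for d in grid
])
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("posterior mean slowing across endpoints:", round(float((grid * post).sum()), 2))
print("posterior P(slowing > 20%):", round(float(post[grid > 0.20].sum()), 3))
```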
Scott Berry: A fully Bayesian primary analysis, based on a disease progression model.
Melanie Quintana: Yep.
Scott Berry: I think that, where I talked about subgroups, many, many subgroups, and learning much more about the heterogeneity of diseases, we're also going to have much better endpoints. Think about wearables, think about the complexity of the endpoints. I think we use a lot of very, very dull instruments in clinical trials to identify whether treatments have an effect, and we have a great deal of ability to use endpoints simultaneously for understanding effect. Imagine a trial where a single endpoint shows an effect and the other, secondary ones don't. Now you're sort of stuck with, well, what does the totality of evidence mean? But maybe an integrated analysis, combining together multiple endpoints, much like your trial does, is gonna be hugely more valuable in incorporating the totality of evidence.
Kert Viele: And by the way, Aaron Judge: .349.
Scott Berry: .349. Yep. Regressing as we speak.
Melanie Quintana: All right, Kert, I'm gonna turn it on you. What's your favorite?
Kert Viele: Oh no.
Melanie Quintana: Bayesian use case in a clinical trial.
Kert Viele: My favorite Bayes use case in a clinical trial. Ooh.
So certainly, I do borrowing, so definitely all of my use cases refer to borrowing. Actually, kind of a regret is a trial that never actually ran. We had a wonderful experience designing an antibiotic trial, which involved external evidence and also involved borrowing. There, you can have an infection at different sites in the body: you could have a UTI, you can have a lung infection, an abdominal infection, a skin infection. We talked about borrowing amongst those, and it had some interesting aspects because, for a lot of antibiotics, if they don't work in a specific part of the body, it's probably the lung, because that's one of the hardest places to get to. So we were talking about ways to actually let the lung split apart from the rest in the modeling. We had great interactions with FDA on that trial; it was part of a grant that actually involved FDA review, but it did not get off the ground. As it turned out, it had funding issues after some changes in funder priorities. But that would've been my favorite.
Scott Berry: What about the ROAR trial? I would've guessed the ROAR trial.
Kert Viele: So ROAR is a fascinating story. ROAR is a basket trial, meaning they looked at a drug in multiple rare cancers, and it had borrowing across the subgroups. Certainly I would call that my most successful Bayesian trial, certainly multiple approvals. This was the first drug that got what's called a pan-tumor approval, so it's approved for any cancer that has a particular biomarker. So at some level it's a pacesetter in that. The interesting thing about that trial is that it's one of those trials where the drug is so awesome that any analysis would've gotten the same answer. So there is that aspect to it, where I can't claim that the Bayesian analysis put it over the top, so to speak.
Scott Berry: Yep.
Yep.
Kert Viele: I think we're running into
Melanie Quintana: ...would've gotten the same answer. I have another question.
Kert Viele: Uh-huh.
Melanie Quintana: How do you feel about mixing Bayesian and frequentist analyses, Scott and Kert?
Kert Viele: So can you give an example of what mixing would be?
Scott Berry: You mean at the end of the trial, some of the analyses are Bayesian and some aren't?
Melanie Quintana: Yep. Yep.
Scott Berry: But what's clear is we need to make Melanie the host next time.
Kert Viele: Yeah.
Scott Berry: She asks better questions, Kert.
Kert Viele: sorry.
Scott Berry: So, hey, part of this is probably a reaction. The New England Journal of Medicine has multiple times come back to us and said, if the primary analysis is Bayesian, everything is Bayesian: we don't care if you set it up in the SAP to do it one way, we're gonna make the whole thing Bayesian. So we used to do some of the analyses Bayesian, but not all of them Bayesian. Now, in circumstances where this could potentially come up, we've created the entire trial from a Bayesian approach, and I like it. I like that if you're gonna do a Bayesian trial, let's do it all Bayesian. In other circumstances, there are certainly scenarios where mixing them and providing both has benefit for understanding what's going on. So I don't take a high road that the two of them can't exist within the same report, that there's not value in both.
Kert Viele: So I certainly agree with you. I like it when it's all Bayesian if you can do it, but I don't know that forbidding the mixing is solving a problem. You certainly want to pre-specify. I don't want to do "hey, I'm gonna do the Bayesian analysis, I'm gonna do the frequentist, and I'll report whichever is higher." But you end up in a case where different people are doing the primary analysis as opposed to all the safety analyses; they have different expertise. It's adding a ton of extra work for what feels like an aesthetic choice. Anyway, it feels unnecessary to me to worry about it.
Melanie Quintana: Yeah, I mean, I think it can be really helpful sometimes, if you have a complex Bayesian model, to show some simpler traditional frequentist summaries alongside the complex Bayesian model. Maybe this GNE myopathy model feels really black-box and complex and people don't know what's happening. So can I pull apart the pieces and show some simple frequentist summaries, so that people can say, hey, okay, the complex Bayesian model has a lot greater precision and we get a strong posterior probability, and maybe we're seeing directional effects in these frequentist results, and that's making me believe your complex Bayesian model. So I have no problem with mixing them. I think sometimes it's nice to show that you get at least not completely different results with the two types of analysis.
Scott Berry: All right. So we're being Bayesian in the interim, and we'll pick this up. This conversation will continue as we talk about Bayesian and non-Bayesian trials and bring you examples, but today we were in the interim talking about Bayesian statistics. Kert, thanks for hosting. Melanie, thanks for joining us and taking over the host role from Kert. That was wonderful.
Melanie Quintana: Thanks.
Scott Berry: And thank you all. Until the next time, we are here In the Interim.