Episode 46
· 43:20
Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: Welcome back, everybody, to In the Interim. I'm your host, Scott Berry, and my, my usual partner in statistics here.
I was gonna say partner in crime, but that sounds bad.
Kert, um, partner in statistics.
Kert Viele is with me today.
So today, you know, yesterday was before.
Tomorrow's after.
We may always consider before and after the FDA draft guidance on Bayesian statistics.
So Kert and I are here to talk about the very much anticipated draft guidance. It's a draft, uh, open for review comments, but the FDA put out a guidance document, the use of Bayesian methodology in clinical trials of drug and biological products, guidance for industry.
And that is recently out.
So we're gonna talk about the guidance.
By the way, interestingly, in that title it says drugs and biologics; CDRH, the Center for Devices, has had a Bayesian guidance out for many years, and so this is now drugs and biologics with the guidance.
And I think if you're a, um, a listener to this program, if you've done many interim analyses with us, you know that we are Bayesian.
We think Bayesian.
We do frequentist trials, but we think Bayesian, and we think there are many cases where these are really good techniques.
And so we're excited to have this document out, and we're gonna provide our reactions to the document.
We're gonna talk about pieces of it.
So, Kert, your first read, what is your reaction to the Bayesian guidance?
Kert Viele: So I, you know,
as a Bayesian, I think most
Bayesians are celebrating this.
I think all Bayesians
are celebrating this.
Um, this has been something that we've
been working on for, you've been working
on it for 20, 30 years, so I've been
working on it less, but certainly decades.
And so it's good to see, um, all of
the things that we've been working on,
all the different designs that have
been developed over the last 15 years.
This is really a clear message from the FDA that these designs are acceptable.
It goes through how to do them
rigorously, how to maintain scientific
standards while these are happening.
It goes through lots of examples
on here are cases where we
think we should use Bayesian.
So all of this is great, and I think it's really important for the FDA to say with a clear voice: we will take Bayesian designs.
We have taken Bayesian
designs and let's, let's go.
Let's do more.
Scott Berry: Yeah, I, I had a
similar reaction to it, you know,
maybe I had a prior expectation for
what the document was going to be.
And, uh, this, uh, I, I
find it very, very positive.
Um, I think it's very well done.
I think it's thorough.
And I'll contrast it a little bit.
You and I talked about, and you can go back and listen to the podcast where we talk about, the ICH E20 on adaptive designs that came out.
It exists and it's a positive thing, the ICH E20, but there's almost this, um, you know, be careful what you're doing.
Almost this negative: if you really, really wanna do it, here's the way to go about doing it.
It was almost sort of pessimistic in the way it phrases things, uh, overly concerned.
This document, I don't think, has that.
I think it's optimistic.
This is a good, positive way to go forward.
I think it's appropriately, um, providing guidance on cases in which you can do it.
So I think it's a really good document.
I think it's a very beneficial guidance.
Kert Viele: I agree with all of that.
Scott Berry: Okay.
So, um, the various pieces to it.
First of all, when it came out, the FDA Commissioner, Marty Makary, uh, Dr. Marty Makary, had a few quotes about it.
So this was thought to be important enough
that the FDA Commissioner provided quotes.
I, I found a couple of those.
I think there's a short video
that he actually came out with,
which is, which is really awesome.
But Bayesian methodologies help address two of the biggest problems of drug development, high costs and long timelines; providing clarity around modern statistical methods will help sponsors bring more cures and meaningful treatments to patients faster and more affordably.
So, of course, very high level.
This is good for patients, good for
sponsors, uh, but important enough
that the FDA Commissioner makes a
comment about what some may think
of as a statistical guidance, which
is also a very positive thing.
Kert Viele: Well, I think one of the key issues here is just how much, you know, we've spent a lot of time synthesizing data.
The world's just awash in data floating around, you know, historical studies.
We see all these meta-analyses.
We see, you know, the things that the guidance talks about, adult patients, pediatric patients.
Synthesizing information and not reinventing the wheel is just a key part of what we do every day.
And so this really, at some
level, what this guidance is
about is how do we synthesize
information from multiple sources?
How do we put things together and
how do we do that in a rigorous way?
And so we would expect to see advantages to both patients and sponsors when that happens.
Scott Berry: Yep.
So both of us are very positive on this.
I think it's a great document.
Let's talk about what's in there, uh, Kert.
They make a strong distinction,
which is kind of interesting.
Very early on in the document they make a strong distinction between trials where there's a claim of type one error control.
And trials where there's not.
And the trials where there's not are tied to scenarios where external data is being brought in as part of it; Bayesian is being used to synthesize external data, there's information there.
And it very clearly states that in those scenarios, type one error control is, I'll say, sort of awkward or less relevant.
It's an early distinction between those two cases of Bayesian, which, by the way, are both reasonable cases where Bayesian could be used: those where you're still controlling type one error and those where external data are brought in.
I thought it was really interesting that that distinction is made very, very early.
Kert Viele: Well, I think you know this,
there's a history here on how we got here.
Um, you know, we've done a lot of
adaptive designs over the years.
We've driven those by Bayesian
aspects in many cases.
You know, you don't have outside information coming in.
You're doing lots of interim analyses.
You don't wanna make false decisions.
You know, we do want to put only good drugs on the market.
We want to fail bad drugs quickly.
The type one error control has been,
you know, a standard part of drug
development for years and years and years.
And so when you start talking about external data, um, this has been, you know, if we talk about the last decade, it's been a struggle to try to put external data into the same box where there's type one error control.
Because, and I think the new guidance says this upfront, if you already have evidence that a treatment works, then, you know, you don't necessarily need type one error control for your new trial.
You've already partially disproved the null, and the current trial is to get you the rest of the way there, to have the totality of the evidence be sufficient, rather than just relying on this single trial.
And so, you know, we've spent a lot
of time with FDA and other regulators,
you know, trying to say, well, here's
what it does to the type one error.
Is this enough?
And it's always been an
awkward conversation.
And one thing this, this document's
clearly hinting at where to
go is let's just move into the
brave new world where we can just
synthesize the evidence directly.
And I think that's a huge step forward.
Scott Berry: Yeah.
So one of the quotes from the guidance, and this is related to what Kert brought up about bringing in external data, says: by borrowing information, one is assuming that the borrowed information is relevant.
Evaluating the degree of borrowing based on the expected outcome when there is no effect under the null is philosophically inconsistent, given a prior which assumes a non-zero effect.
So the whole point is, how much do we care about what happens under the null if we've got evidence that says the null is unlikely?
And here's a simple example.
Suppose a company has run a phase III trial, rejected the null, has strong evidence, and is running another trial.
You know, what does it mean to think about type one error in that circumstance, when the probability of the null is already very low?
And so that's the notion.
From a pure statistical standpoint, if you're bringing in external data that is positive, and that data is used, you generally are not controlling type one error in this new trial.
If the null is true, you can get this inflation, and the guidance very plainly says, you know, we're gonna calculate that, but it's not necessarily constrained.
And these are really the two separate cases of Bayesian.
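A minimal sketch of the inflation Scott describes, with hypothetical numbers throughout: the 30% null response rate, the 45-out-of-100 external dataset, the 100-patient new trial, and the 97.5% posterior bar are all assumptions chosen for illustration, not taken from the guidance. Under the null, a flat-prior analysis behaves close to a conventional 2.5% test, while borrowing optimistic external data declares success far more often.

```python
# Minimal sketch, hypothetical numbers: chance of declaring success when the
# null (no benefit) is true, with and without borrowing positive external data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, null_rate = 100, 0.30          # new trial size; true rate equals the 30% control rate
hist_succ, hist_n = 45, 100       # assumed optimistic external data (45% responders)
threshold = 0.30                  # success claim: response rate above 30%

def prob_success(prior_a, prior_b, sims=20000):
    """Fraction of simulated null trials where P(rate > threshold | data) > 0.975."""
    x = rng.binomial(n, null_rate, size=sims)
    post_prob = stats.beta.sf(threshold, prior_a + x, prior_b + n - x)
    return np.mean(post_prob > 0.975)

print(f"flat prior (no borrowing):      {prob_success(1, 1):.3f}")
print(f"borrowing the external 45/100:  {prob_success(1 + hist_succ, 1 + hist_n - hist_succ):.3f}")
```

The first number lands close to the conventional 2.5%; the second is far above it, which is exactly why the guidance treats borrowing designs separately from designs that claim strict type one error control.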
Kert Viele: Yep.
I think it also gets at just what you need to do for the rigor there.
There's lots of discussions.
You know, I think over the past,
I don't know how many of these
borrowing trials we've done over
the past decade, but it's a lot.
And you know, originally our discussions,
we talked about the methodology.
Um, more and more of the discussions
today get into, are we comfortable
with what data you're borrowing?
And that's a key part of the, um, the guidance.
The guidance here essentially says, you know, if we're gonna allow you to inflate type one error, or discard the concept of type one error itself, you need to be bringing us real information that has validity: patient-level data, make sure your covariates make sense.
All of that is in there.
And that's been more and more a key part of our discussions.
And I think, you talked about it being a statistical guidance, but it's also a bit more than that, in terms of the scientific guidance over, you know, if we're gonna synthesize information, what is information, what's relevant and what's not.
Scott Berry: Yeah, they talk a lot about, and I think it's a case that our listeners can understand, that it's hard to refute such a thing.
They talk a lot about pediatrics and adults.
One of the more common ways in which Bayesian borrowing has been used is to estimate an effect within pediatric populations using results from adults.
So, uh, there are a number of scenarios and, um, diseases that are much more common in adults than they are in pediatrics.
A simple example of this, and it's come up, is acute ischemic stroke.
I was unaware that, actually, adolescents and pediatric patients can have an ischemic stroke.
They're very, very rare.
In adults, it's a very, very common scenario.
Um, a huge burden, um, that many, many therapies are working on.
But we have treatments that are effective in adults.
And now you might have tens of patients in pediatrics.
This happens in a number of diseases that are quite rare in pediatrics, and we want to estimate the effect in that group.
We can't enroll a trial of 600 pediatric patients where we enrolled 600 adults.
We have information about efficacy.
We, we still want to collect the
data, we wanna run the trials,
but we might wanna borrow.
Uh, and really this is a statistical way in which the effect in pediatrics can be estimated by using the information in adults.
And this has been used in a number of examples, a number of, uh, FDA examples already, and they highlight this as a case of borrowing, a hugely valuable role of Bayesian.
And now they extend it.
I found it really wonderful that they spent a good bit of time talking about the role of estimating the effect in one patient group using subgroups or related diseases.
I think we're very much in the
future and the present, where we're
looking at a particular indication,
and there are related indications.
It can be severity of disease, it can
be genetic, uh, uh, classifications
of the disease, slight differences
of the disease, but they're related.
And you demonstrate an effect, or you have data, in one.
It's relevant to these other groups,
and they spent time on this, and they
opened this up as very much a place in
which Bayesian modeling can be used.
I thought that was fantastic that
they highlight that scenario.
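A minimal sketch of the adult-to-pediatric borrowing Scott describes, with entirely hypothetical numbers: the effect estimates, standard errors, and the assumed adult-versus-pediatric spread tau are illustrative only, not from any real program. It uses a simple normal-normal model that shrinks a noisy pediatric estimate toward the adult-informed prior.

```python
# Minimal sketch, hypothetical numbers: borrowing an adult treatment effect
# when estimating a pediatric effect via normal-normal shrinkage.
import numpy as np

adult_effect, adult_se = 0.30, 0.04    # precise estimate from ~1000 adults (assumed)
peds_effect, peds_se = 0.18, 0.15      # noisy estimate from ~50 pediatric patients (assumed)
tau = 0.10                             # assumed SD of the adult-vs-pediatric difference

# Prior for the pediatric effect induced by the adult data: the adult estimate,
# widened by the plausible adult/pediatric difference.
prior_mean = adult_effect
prior_var = adult_se**2 + tau**2

# Conjugate normal update combining that prior with the pediatric data.
w = (1 / prior_var) / (1 / prior_var + 1 / peds_se**2)
post_mean = w * prior_mean + (1 - w) * peds_effect
post_sd = np.sqrt(1 / (1 / prior_var + 1 / peds_se**2))

print(f"pediatric-only estimate: {peds_effect:.3f} (SE {peds_se:.3f})")
print(f"borrowed estimate:       {post_mean:.3f} (SD {post_sd:.3f})")
```

The smaller tau is assumed to be, the more the pediatric estimate is pulled toward the adult result; a large tau leaves the pediatric data essentially standing on its own.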
Kert Viele: Yeah, and they really highlighted it under, I think, a couple of different situations.
They're related, but in what we've worked on, you know, one of these is basket trials, where you may have a targeted therapy.
Um, you know, we've worked on the ROAR trial, that was for BRAF-positive patients, and in that trial the groups of patients are defined by where the tumor is in the body.
Um, so, you know, is it in ovarian?
Is it thyroid?
Is it lung?
Whatever location the case may be.
And, you know, in that trial we got a pan-tumor, well, of course the sponsor did, I like to view us as part of the team always, um, but, you know, a pan-tumor approval.
So essentially you can treat any BRAF-positive patient with this drug, because of that borrowing between those groups that they viewed as related.
Um, we've also done trials where the
FDA's interested in sensitivity analyses,
where you've got, say, more standard
subgroups, severity of disease, um,
you know, race, sex, whatever the case
may be, where we've said, you know, oh, this group only has 12 patients, but the result is a little bad.
Should we worry about that, or should we view it as more in line and it's just noise?
And we've done analyses in there.
It's actually the same models.
But the FDA does distinguish them as slightly different questions, and I think that's valuable, especially as we partition diseases more and more, for how we do this.
Scott Berry: Yep.
Yep.
They highlight one of the, um, CID projects, which was for, uh, seizures.
And there are different classifications of patients that have these seizures, and they're related.
And they had run trials and had
approval in three subtypes of this
disease, and they were now looking
at a fourth subtype and they had very
homogeneous, uh, estimates of efficacy.
Uh, I believe a 25 to 30% reduction in seizure frequency in these three.
And now they're running a new
trial in a new population.
And really, scientifically, should they have to run a 2.5% alpha-controlled trial for every one of these new groups, or is some level of borrowing, very similar to the basket trial you talked about, another circumstance?
So, uh, this, this, I think it's a great
use of Bayesian methods moving forward,
and I think they'll be more commonly used.
Kert Viele: Yeah.
Scott Berry: So the other side of this are
Bayesian trials where you're controlling
type one error and they make a, a comment
in here, which I, I think is positive.
They don't spend a ton of time on
adaptive trials, but they do say that, um, when you're calibrating type one error and you're making a claim that the trial has, say, 2.5% one-sided type one error, such an approach may be appropriate for designs where Bayesian approaches are used not to synthesize multiple information sources, but instead to facilitate complex adaptive designs.
Kert Viele: So I think this is actually, it's been an undercurrent of a lot of guidances, and it's why a lot of times people ask us, well, why should I be Bayesian at all?
And adaptive designs are one of the key issues.
Where, you know, the first thing is just the whole paradigm: the idea that we collect evidence, we update, we collect more evidence, we update, we collect more evidence, we update.
That looks exactly like the interims in a trial.
But the other thing that goes on, which is really important, is, you know, there's this standard example.
If you do sequential analyses in grad school, they tell you: suppose I collect 40 observations and I get 20 successes, and I need to compute a P value for that.
If I ran an experiment where I collected 40 observations and counted the number of successes, I get one P value.
If I ran an experiment to collect 20 successes and I measured the number of trials, I get a different P value.
That doesn't happen with a Bayesian.
And so the notion is this
gives you some flexibility.
So if you're a Bayesian, you can go
ahead and you get the same answer
regardless of how you design the trial.
And so the ability that we've had so far to use this while also maintaining type one error control and regulatory standards, this has been, you know, a very simplifying assumption that's kept things interpretable and feasible.
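A minimal sketch of the stopping-rule example Kert gives, with an assumed null rate of 0.35 chosen only for illustration: the same 20-out-of-40 data yield different one-sided P values under the two frequentist designs, while the Bayesian posterior is identical under both because the likelihoods are proportional.

```python
# Minimal sketch, assumed null p0 = 0.35: same data (20 successes in 40
# observations), two stopping rules, two P values, one Bayesian posterior.
from scipy import stats

n, k, p0 = 40, 20, 0.35

# Design 1: fix n = 40 and count successes (binomial sampling).
# One-sided P value: P(X >= 20 | p0).
p_binomial = stats.binom.sf(k - 1, n, p0)

# Design 2: sample until 20 successes and count trials (negative binomial
# sampling). One-sided P value: P(reaching 20 successes within 40 trials | p0),
# i.e. P(failures before the 20th success <= 20).
p_negative_binomial = stats.nbinom.cdf(n - k, k, p0)

print(f"binomial-design P value:          {p_binomial:.4f}")
print(f"negative-binomial-design P value: {p_negative_binomial:.4f}")

# Bayesian answer with a flat Beta(1, 1) prior: the posterior Beta(21, 21)
# is the same under either stopping rule.
posterior = stats.beta(1 + k, 1 + n - k)
print(f"posterior P(rate > p0):           {posterior.sf(p0):.4f}")
```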
Scott Berry: Yeah, and it's kind of interesting.
And you made a comment that I've been sort of doing this for 25 years.
We've been building trials like this, I have been, for 25 years, and it's amazing to think back from then to now, where simulation control of a trial was very rare.
We actually did it even 15, 20 years ago,
but it was very rare, very controversial.
I think it was sort of dismissed by a lot of people.
And now here it is, very clearly represented, uh, in a guidance document.
And we've had others of these complex innovative designs guidance documents, but this very clearly says it is a scientifically acceptable thing to do, which is great to see.
Okay.
So, um, the other thing, and a big part of this: if you look at how much time and effort the guidance spends on different parts of it, it spends a lot less time on the cases where you're not borrowing data, but it says Bayesian can be used there, it can facilitate more complicated designs.
I think that's all straightforward.
It spends a lot more time on the borrowing, that Bayes-equals-borrowing part of things, and if you read it, a lot of it spends time on that.
And I think that's not inappropriate; those are harder scientific cases, but it spends a lot of time there.
I think it's a really valuable document also for thinking about when, in what circumstances, borrowing is reasonable and the impact it has, and this is almost bigger than the statistics part.
I think the scientific guidance is also helpful for that idea of bringing in external data.
Kert Viele: No, I, I agree completely.
Um, I do think it's important that
they have said repeatedly that Bayesian
is, uh, available for all designs.
They talk about this in several parts; they basically are upfront that they're emphasizing the things they think you're gonna have the most discussion on, while saying you can do, you know, whatever you need to do to bring in external evidence.
Um, I'm hoping this will finally put to bed the, we occasionally get the "oh, the FDA doesn't take adaptive" or "doesn't take Bayesian designs."
And it's sort of like, well, we've been doing this for 15 years.
Are you sure?
So.
Scott Berry: Yep.
Yep.
Okay.
A couple of interesting, maybe, uh, more statistical things it does.
It defines, uh, an analysis prior and a design prior, which I think was kind of interesting.
So what is an analysis prior and what's a design prior, Kert?
Kert Viele: Uh, so the analysis prior is the thing that's actually in the protocol.
This is what drives your inferences.
When you say the trial will be a success if the following condition is met, that's the analysis prior.
There's one of them.
It says, this is the one we've decided on.
We've got agreement with the FDA, the sponsor's okay with it.
Uh, we feel this is the right answer, the best answer we can have.
"Right" is always a loaded term for a Bayesian, but it's the best answer we can get.
Um, but of course, you know, uh,
everybody always makes the obvious
point that Bayesian analyses
are sensitive to the prior.
If you pick a different prior,
you'll get different answers.
So one thing that we want to do is get an idea of that sensitivity, and we wanna get an idea of that sensitivity for a couple of different reasons.
One is, um, when we get data at the end, we want to make sure that if we had different people looking at it... I know your advisor, Jay Kadane, has a lot of examples of this: in a trial, you should always think of an optimist and a pessimist, and you run the trial until the optimist and the pessimist finally come into agreement.
You know, however it lands, the drug works, the drug doesn't work, whatever it may be.
So you wanna make sure that you've got that kind of agreement.
So different design priors, you may think of them a little differently, but I personally think of them as different people that might be analyzing the trial.
The other thing that you care about here
is we wanna do operating characteristics.
How well does the trial work?
Does it answer the question?
And knowing that our prior may not completely reflect reality, we wanna make sure it's robust to that.
So we want to make sure, hey, if
our prior isn't correct, we're
still gonna get the right answer.
We're still gonna treat
future patients well.
We're still gonna estimate
parameters effectively.
Uh, it talks about, um, you know, mean squared error of estimates, but this is a robustness issue: robustness to the prior, and being able to convince a broad audience.
But analysis priors are what the trial actually is, how it drives decisions.
And design priors are a lot of different other perspectives, to make sure that it's robust and gives an answer that satisfies everybody.
Scott Berry: Yeah, so I could simulate the trial, with the analysis prior as part of the design.
In the design part, a lot of times we assume the parameter has a specific value, that the probability of success is 50%, or 60%, or 70%, and we assume that.
Now it's also looking at bringing in a distribution for that.
Okay.
And especially if we're bringing in external data, we may come in and say the distribution is centered on 70% but has this variability; how is the trial going to behave?
Which now becomes a really interesting thing, and the document lays this out as different design priors, and it does give reference to skeptical and enthusiastic priors, as you referenced.
So I thought that was interesting, and it might change a little bit our behavior and what we put in, or what people are asking for, with that terminology, which I think was helpful.
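A minimal sketch of the analysis-prior versus design-prior distinction, with hypothetical numbers throughout: the 100-patient single-arm trial, the 60% threshold, the 97.5% posterior bar, and the Beta(21, 9) design prior centered near 70% are all assumptions for illustration. The analysis prior defines the success rule used inside the trial; the design prior is what gets averaged over when simulating how the trial behaves.

```python
# Minimal sketch, hypothetical numbers: operating characteristics at fixed
# true rates versus averaged over a design prior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100                       # assumed single-arm trial size
threshold, bar = 0.60, 0.975  # success: P(rate > 0.60 | data) > 0.975

def trial_success(true_rate):
    """Simulate one trial and apply the protocol's (analysis-prior) success rule."""
    x = rng.binomial(n, true_rate)
    posterior = stats.beta(1 + x, 1 + n - x)   # flat Beta(1, 1) analysis prior
    return posterior.sf(threshold) > bar

# Power / type I error at fixed parameter values.
for rate in (0.60, 0.70, 0.80):
    power = np.mean([trial_success(rate) for _ in range(2000)])
    print(f"P(success | true rate = {rate:.2f}): {power:.3f}")

# Averaged over a design prior centered near 0.70 (Beta(21, 9)), reflecting
# a distribution of plausible true rates rather than a single assumed value.
avg = np.mean([trial_success(rng.beta(21, 9)) for _ in range(2000)])
print(f"P(success) averaged over the design prior: {avg:.3f}")
```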
Kert Viele: Yeah, no, I agree.
And we've done this.
We've talked about how, if you just run a phase two trial, you get a broad confidence interval for the effect.
You're running phase three, and, you know, if the drug truly works at the top end of that confidence interval, you need a tiny trial.
If it works at the bottom end, usually at the bottom end it's not even effective.
And so you want to think about, you know, the whole distribution, the range of those values.
I don't think this is gonna change.
You still wanna make sure that individual parameters work well.
You know, the futility rules apply to part of the scale; the success rules apply more at the effect sizes.
We're still gonna look at individual points in that distribution, but the average behavior's gonna be really important.
Scott Berry: Hmm.
Another interesting piece of terminology they used, and it'd be interesting to find out more about it.
You and I love, uh, Bayesian hierarchical modeling, Bayesian borrowing, the terms.
A lot of times when you're using models like that, Bayesian hierarchical models, we talk about borrowing; we talk about, sometimes, shrinkage.
The FDA guidance uses the term discounting.
Kert Viele: Yep.
Scott Berry: I, uh, we very much use it.
If I have an adult population of a thousand patients and I have 50 pediatric patients, and I'm estimating pediatrics, I'm not gonna lump the thousand adults in with the 50; they'd completely overwhelm it.
So most of our Bayesian techniques discount the thousand.
They borrow some level of that.
They're really one and the same.
But it was interesting, the choice of term, that the FDA refers to this as discounting rather than shrinkage or borrowing.
Kert Viele: Yeah, so, um, obviously I use the term dynamic borrowing a lot in talks.
I think that probably is the more common term than discounting.
Um, I tend to think of discounting in terms of, if I'm borrowing from a population, I say every patient that I'm borrowing counts the same as a fourth or a half or whatever number you agree with.
Uh, typically with FDA, what do we agree?
Something like one fourth to two thirds is a common range here.
It depends on the situation, but anyway, you basically give a weight and you say, okay, that's how much I'm gonna borrow.
Um, in the models that we fit now, we actually allow the data to determine what that value is.
And the notion is, if the data agree, if our current trial agrees with history, we're gonna go with a higher weight.
If it doesn't, then preferably we'd like that weight to be zero, and we wanna say, look, this doesn't match; we're in, you know, a different set of patients, they're behaving differently, we don't wanna borrow anymore because of what we've seen.
And so this is the idea of dynamic borrowing.
They've related it to dynamic discounting.
Um, we've even seen this done in multiple stages, where people will pick a weight and then separately do an analysis with that weight, rather than an encompassing design.
Uh, but essentially that's what it's getting at.
I think we've talked about this before.
Uh, CDRH has a lot more comfort with
static borrowing in a lot of cases,
and I think that's because devices
are just different than drugs.
There's less reason to think you're gonna have these "oh, it's completely different" situations.
But, uh, it was an interesting
aspect of the guidance and emphasizes
some of the differences here.
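A minimal sketch of the static discounting Kert describes, with hypothetical numbers: the 70-of-100 historical and 30-of-50 current datasets and the weight of one half are illustrative only. A power prior scales the historical counts by the agreed weight, whereas the dynamic approaches he mentions would let the data choose that weight.

```python
# Minimal sketch, hypothetical numbers: static discounting via a power prior,
# where each historical patient counts as a0 of a current patient.
from scipy import stats

hist_succ, hist_n = 70, 100   # assumed historical data: 70/100 responders
curr_succ, curr_n = 30, 50    # assumed current-trial data: 30/50 responders
a0 = 0.5                      # agreed static weight: each historical patient counts as half

# Beta-binomial conjugacy with a flat Beta(1, 1) baseline prior: the power
# prior down-weights the historical successes and failures by a0.
a = 1 + a0 * hist_succ + curr_succ
b = 1 + a0 * (hist_n - hist_succ) + (curr_n - curr_succ)
posterior = stats.beta(a, b)

print(f"posterior mean response rate: {posterior.mean():.3f}")
print(f"95% credible interval: ({posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f})")
```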
Scott Berry: So we use the term dynamic borrowing all the time, and now the guidance refers to dynamic discounting as the term.
And I think the guidance document generally gives a positive view of dynamic discounting as opposed to static.
One of the things it says, dynamic
approaches are used in many cases due
to the more advantageous operating
characteristics, such as with respect
to bias and mean squared error
resulting from the lesser borrowing
in cases of prior data conflict.
Kert Viele: Okay.
Scott Berry: So if it turns out that things are just looking different, we discount them more, we borrow less.
Uh, and I think it, it gives a
nod that these are beneficial.
So that was kind of
interesting in the guidance.
Kert Viele: Yep.
I think we've also designed adaptive
trials that have changed their sample
size related to how much that prior
data conflict exists, so that you can,
if, if it doesn't look like you ought
to borrow, do more in the current trial
and make the trial stand on its own.
Scott Berry: Yep.
It doesn't, uh, one Bayesian thing I love is predictive probabilities.
It doesn't say predictive probabilities, but I think that's okay.
I don't think we're putting predictive probabilities in labels.
I don't think we're using them in the analysis; the analysis prior, for example, and the posterior are usually driving those conclusions.
Predictive probabilities drive a lot of adaptive designs; they're used for futility.
But I think it's okay that it doesn't mention them.
It certainly doesn't mean they can't be used as a utility in our adaptive designs.
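A minimal sketch of the predictive probabilities Scott mentions driving futility decisions, with hypothetical numbers: the 27-of-50 interim data, 100-patient final size, 50% threshold, and 97.5% posterior bar are assumptions for illustration. It averages, over the beta-binomial predictive distribution for the remaining patients, the chance that the completed trial clears its success rule.

```python
# Minimal sketch, hypothetical numbers: predictive probability of trial success
# at an interim analysis.
import numpy as np
from scipy import stats

n_interim, x_interim = 50, 27     # assumed interim data
n_final = 100                     # planned final sample size
threshold, bar = 0.50, 0.975      # success: P(rate > 0.50 | final data) > 0.975

# Posterior at the interim under a flat Beta(1, 1) prior.
a, b = 1 + x_interim, 1 + n_interim - x_interim

# Future successes among the remaining patients follow a beta-binomial
# predictive distribution.
n_rem = n_final - n_interim
future = np.arange(n_rem + 1)
pred_pmf = stats.betabinom.pmf(future, n_rem, a, b)

# For each possible future count, would the final posterior clear the bar?
final_post = stats.beta.sf(threshold, a + future, b + n_rem - future)
pred_prob_success = np.sum(pred_pmf * (final_post > bar))

print(f"predictive probability of success: {pred_prob_success:.3f}")
```

A low value here is the kind of signal a design might use to stop for futility, even though the quantity never appears in the final analysis.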
Kert Viele: Well, there's gotta be lots of places.
I mean, we do a lot of adaptive trials, and we do longitudinal modeling in order to decide whether to stop.
We know that, uh, six-month outcomes in a knee trial are highly predictive of two-year outcomes.
We have a Bayesian model relating those in our adaptive trials, but as you said, those drop out of the final analysis.
And so I think they'll appear, they're implicitly mentioned when we need to think about all those priors in the guidance, but it's less of a, you know, driving-the-main-conclusion role.
Scott Berry: Hmm.
Uh, I, I was a bit surprised.
There's a couple places in there
where they talk about elicitation
of expert opinion, and I was a
little surprised it was in there.
I, I think it's great.
Um, we have not done this much.
We've not been involved in too many cases
of this, but I think it opens up the,
the, the door of this being a possibility.
It says, in some cases it may be helpful to conduct a formal expert elicitation exercise with subject matter experts to incorporate the degree of consensus in the level of borrowing.
That's particular, uh, to borrowing.
So I think that's kind of an interesting part, and it'd be interesting to see if this is something used more.
Kert Viele: Yeah, I think there's a whole opening here on, well, opening's the wrong word, but basically this has opened up elicitation and, um, eliminating type one error control.
These are things that the FDA hasn't done a lot of in the past.
We've really been in type one error world for a while, and so we'll be curious, you know, what the process will be for making that happen.
You know, a lot of stuff's been written on elicitation.
You know, we want independent experts.
We want to get all ends of the spectrum.
Make sure you have all the views
incorporated in there, but make sure
they're incorporated to the degree
that they represent the population.
You don't want to just look
at outliers, so to speak.
How are we gonna make sure that sponsor A and sponsor B are treated fairly when you're using informative priors, or when that happens in borrowing?
But anyway, there's gonna be a lot of work going forward to implement those aspects, and I hope we're a part of that.
Scott Berry: Yeah, and it'll be interesting.
A couple of interesting things about the impact of this.
Let's think about the impact of this guidance document.
Uh, at other times I've found myself almost defensive when a guidance document may not be overly positive: is this gonna have an impact?
Um, and I usually say, look, nowhere does it say I can't do something, and we go forward and do it.
Here it's somewhat the opposite, that this is very positive, and so the interesting thing will be the impact.
Will sponsors see this and open the door to it?
Open the door to the fact that the FDA accepts these, that they're open to these ideas?
There's also the far extreme of this.
Does it bring out, I'll call it bad actors, which is a harsh term, but cases where people want to do things that we think are scientifically not credible?
And we do get that occasionally: they've tried every frequentist approach and their data doesn't say success, so maybe the Bayesians will say it's successful, uh, sort of thing.
And those are not fruitful circumstances.
So it'll be interesting.
Both the relative acceptance, does this increase the use of Bayesian methods?
And, you know, do we get the sort of bad actors that are gonna want to do borrowing of data that's inappropriate and say, oh, you say you accept data borrowing, you know, we should be able to use this.
Kert Viele: Well, I mean, we know there's lots of data you shouldn't borrow from, um, data that's very old, lots of data that comes from different patient populations, and the guidance goes through this in a lot of detail on when you should say no.
I've been told point blank by the FDA: we're fine with the methodology; this data we don't think applies.
And it's a very reasonable argument to make.
Um, as you said, the thing that terrifies me the most is people using Bayesian entirely to lower the bar.
I didn't win.
I got a P value of 0.14, one-sided.
Oh, if I say it's an 86% superiority probability, it'll sound high and I'll get approved.
That kind of stuff worries me to the ends of the earth.
It's very different than the borrowing
aspect where we're trying to argue
the totality of the evidence.
You know, this trial is a piece.
What we're borrowing
from is the other piece.
We're not lowering the bar,
we're putting it together.
And you need people to understand that well enough.
Anyway, you can see people missing the, deliberately missing the nuance, shall we say.
And, uh, anyway, we've told a lot of people no who have come to us with that kind of request.
Scott Berry: Mm.
So there's a difference between lowering the bar and the totality of evidence jumping over the high bar, which I think this is trying to present as the possibility, um, uh, absolutely...
Kert Viele: Sorry, Scott, I'm gonna add one more thing.
I was gonna say, you know, you think about the adoption curve of a new technology.
We talk about early adopters and late adopters.
I think one key aspect of this is that with this guidance, if you're doing it now, you're a late adopter.
It's been accepted by regulators.
You're not pushing the boundaries.
They've said, please, please, please, Bayesian is fine.
The FDA Commissioner has said, we view it as a step forward.
So I think this has advanced
the adoption curve considerably
with its mere existence.
Scott Berry: Yeah.
Uh, you will still get some that say, show me 10 trials that have used this and have been successful, exactly in my field, exactly this thing.
So there will be late, late adopters.
Kert Viele: At least we still have trouble with the, you know, "show me 10 trials in my field" and so on.
But if you just ask for 10 trials, we've got three on the first page of the guidance.
So.
Scott Berry: Yep.
Yep.
Yep.
Um, the other part about the impact of this, so it'll be interesting to see the impact on sponsors.
Um, what about the global impact?
I think if you think about the US coming out with this guidance, I think it would be considered by many as much more open to Bayesian techniques than the ICH E20.
They're different guidances; a harmonization guidance is largely an "everybody agrees with this" sort of thing.
Um, and so I think they're naturally going to be behind.
I think this is gonna have a, it's going to increase the use of Bayesian methods.
I think it's gonna show their value.
I think these are going to be better understood by a broader audience, and I think the use is going to rise globally and make changes, uh, because of the increased use that may be driven by the US, uh, positive reaction to these.
What do you think about the global scene?
Kert Viele: Well, as you said, you know, anybody who wasn't Bayesian yesterday, so to speak, who's not in the US and not at the FDA, they're not going to, you know, change their mind overnight.
It's gonna be a data point.
Everybody has open minds.
Um, I do think what you said, though, is the most important part, which is getting examples, more and more examples of Bayesian designs where we actually see them work.
That's what convinces people: in this trial, I saw what the benefit was.
I saw what it does to save sample size, get cleaner solutions, better solutions, in many cases, the basket trials and so on.
And so I think that that's
where we're really headed.
You know, we think this
is the good way to go.
We want to do a lot of these examples.
If there are problems, we want to
fix 'em, and we think that this
is gonna prove itself over time.
So this is an opportunity for
Bayes to prove itself, and
that's great.
Scott Berry: Yep, yep.
Uh, agree.
And it'd be interesting.
I think, for example, when we look back over the years, I think the pandemic had a huge impact on platform trials because they were used, they were published, people saw them.
Oh, I get it.
You know, yeah.
Uh, Bayesian was used a bit in the pandemic, a good number of examples.
Uh, Janssen's vaccine was approved in a Bayesian adaptive trial.
Um, and it's, oh, I get it.
This makes sense.
This is reasonable.
That has a huge impact then on the use of it.
I think this is going to be part of that as well as signaling that, and maybe the existence of this document is in part because of COVID and other experiences and sort of seeing that.
So I think this is a very positive move scientifically, for sure.
Kert Viele: And we, we learned a ton
about platforms in doing those trials.
The design that we do today is better
than the design that we did in 2019 or 2020 or whenever these came out.
And so, you know, the more of these
Bayesian designs we do, they'll get
simpler and standardized and cleaner.
That's all to the good.
Scott Berry: Yep.
Yep.
So it's an exciting day, uh, with the new guidance.
Uh, all of you out there, um, the field is open.
Bring your Bayesian on.
Uh, any concluding remarks, Kert?
Kert Viele: Uh, I think I'm
just, I'm happy to see this.
We've been working on
this for many, many years.
I applaud the FDA folks: it's a long document, it's got a lot of content in it, and there was a lot of work that went into all the reviews that led to this.
So we appreciate their time.
We appreciate everybody who spends time on these kinds of guidance documents, uh, and how much effort and, uh, thoroughness is needed to do them right.
That's, that's my concluding remark.
Scott Berry: Yeah, a very
well written document.
A great effort, uh, and we're excited about it.
Kert Viele: Yep.
Scott Berry: Alright, thank you all for joining us here, uh, In the Interim.
Uh, until the next time, we will be here in the interim.