Digital Googols · Episode 34 · 38:14

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: Welcome back, everybody, to In the Interim. I'm Scott Berry, your host.

For those of you who join us in the interim frequently, you may notice my background is different from our usual studio. It's a stunningly beautiful place, and I'll lift the camera up a little bit to show that I'm actually in Southern California, taping here next to the grand old Pacific Ocean. And it's beautiful. Or am I?

Many of you who join webinars and Zoom meetings know that lots of people have virtual backgrounds, and a lot of people have the same virtual background, which makes them look like they're in the same office. So you don't know whether I'm actually on the Pacific Ocean, or whether it's virtual, digital, and I'm stuck in the bowels of my attic in Austin, Texas, or whether I really am on the coast. So, by the way, I am, of course. Or am I?

Am I Scott Berry, or am I a digital twin of Scott Berry? So that's my topic for today: digital twins. It's a term used in clinical trials. It plays a role in clinical trials, and it plays a role in AI. What is it, besides perhaps a picture of the Pacific Ocean, which is not a digital twin? A picture of something is not a digital twin. But let's talk about what it is, how they're used in trials, and their potential.

And I'll give you my viewpoint; I'm actually asked about this quite a bit. Within trials, we do a lot of adaptive trials, hence sitting in the interim of a clinical trial right now, where you are. And we do a lot of Bayesian analyses, which can be separate from using external data. You can do Bayesian analyses without using external data, but when you bring external data into a trial, the Bayesian approach is a really powerful thing. So we end up doing quite a few trials using external data. And digital twins are external, in some sense, to the clinical trial. So I'm asked a lot about digital twins, and I will give you my viewpoint today, here, in the interim.

So, the name digital twin. I'll tell you what Wikipedia says: a digital twin is a digital model of an intended or actual real-world physical product, system, or process (a physical twin) that serves as the effectively indistinguishable digital counterpart of it for practical purposes, such as simulation, integration, testing, monitoring, and maintenance. The physical twin is the physical thing we're actually interested in. So a digital twin is a creation, a model-based creation, a simulation of a real physical thing that's of interest.

The original term traces back to NASA, where they were creating digital simulations, digital twins, of a physical system. This could be a rocket ship, this could be a new airplane, this could be many different interesting physical things that we can't do a ton of testing on. You could imagine the Department of Energy doing this for nuclear testing, where we can't just be blowing things up. So we're creating digital twins of that physical system. Incredibly powerful, but of course only as powerful as its realism.

Let me give you an example of a digital twin that we've never called a digital twin before, but it is completely a digital twin, and it's something we do quite a bit of: clinical trial simulation. The Wikipedia definition of a digital twin talks about simulation. In the design of clinical trials, we create an in silico digital version of the physical trial. The physical trial is run once. We don't want to go run 10,000 physical trials to figure out the best way to design that trial; that would take many, many years, and we can't do it. So we create a clinical trial simulation of that physical trial.

We try to create it with all of the needed detail of the things we're going to vary. So we model enrollment rates and variability in enrollment rates, typically a Poisson process where there's ramping up of sites. We incorporate dropouts of patients and the behavior of patients on the controls; maybe they're seeing regression to the mean. Then we simulate their outcomes in the trial, with variability in those outcomes. We even test treatment effects in that.

So we create this digital system that has value to us as a digital twin of that physical system, of that single physical trial that we run once. The value is that this simulation is in silico; we can do things to it. We can add response-adaptive randomization: what if we change the design to have that? Then we run it tens of thousands, millions of times, and we look at how well it works. Does it get the right answer, where we feed the right answer into it?
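To make that concrete, here is a minimal sketch of one such simulation loop in Python. Every parameter in it (enrollment rate, dropout rate, effect size, the simple z-test) is a made-up illustration, not any particular trial's design:

```python
# A minimal clinical-trial-simulation sketch. Every parameter here
# (enrollment rate, dropout rate, effect size, test) is illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n=200, true_effect=0.5, dropout_rate=0.10, sd=1.0):
    """Simulate one virtual run of the trial; return True if it 'wins'."""
    # Enrollment: arrival times from a simple Poisson process (exponential
    # gaps); a fuller version would add site ramp-up. Not used further here.
    arrivals = np.cumsum(rng.exponential(scale=1.0, size=n))
    # 1:1 randomization to control (0) or treatment (1).
    arm = rng.integers(0, 2, size=n)
    # Outcomes: control mean 0, treatment shifted by the effect we feed in.
    y = rng.normal(loc=true_effect * arm, scale=sd)
    # Dropouts: a fraction of patients contribute no outcome.
    observed = rng.random(n) > dropout_rate
    y, arm = y[observed], arm[observed]
    # Two-sample z-test at one-sided alpha = 0.025.
    diff = y[arm == 1].mean() - y[arm == 0].mean()
    se = np.sqrt(y[arm == 1].var(ddof=1) / (arm == 1).sum()
                 + y[arm == 0].var(ddof=1) / (arm == 0).sum())
    return diff / se > 1.96

# Play Mother Nature: feed in the right answer, run the twin many times,
# and see how often the design recovers it (estimated power).
wins = np.mean([simulate_trial() for _ in range(10_000)])
print(f"Estimated power: {wins:.3f}")
```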

This might be very similar to what NASA does, where they're simulating different aerodynamics for the system, different weights for the system. What if we alter the wing shape? How is it going to behave? What if we change the material? That's what we're doing when we're doing clinical trial simulation. We vary the design, and we look at how well it flies. Is it efficient in its flight? Does it get to the right answer? The beauty of it is that we're pumping in the right answer. We get to play the role of Mother Nature and say: suppose the treatment has a 50% effect. Does the clinical trial simulation get the right answer?

Now, there are critics of this, critics of clinical trial simulation. And of course the clinical trial simulation is only as valuable as how well it reproduces the reality of the physical system.

Now, usually when we're doing it, we put in the detail needed to test the things we're going to test. We may not put in detailed aspects of the biology of a patient, because that's not what we're interested in. We're not actually doing PK/PD digital twin simulation of the human body: what happens when we give this drug and its half-life is a certain thing, what happens in the physical system, and how do we forecast outcomes? It's a beautiful thing; that's not my specialty, I don't do that. But we do it in a clinical trial. We do it to the detail needed.

So there are critics of simulation, and in some ways, if you can analytically calculate all these things, you don't need simulation. When you're doing a simple fixed trial of a single endpoint and you're interested in the power, we can go look this up in tables. We have simple formulas. We have an analytical characterization of that physical system, so you don't need simulation.
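For instance, for a simple fixed two-arm trial with a normal endpoint, power is a one-line closed-form calculation. This short sketch, with illustrative numbers, shows the analytical characterization that makes simulation unnecessary in that setting:

```python
# Closed-form power for a one-sided two-sample z-test: the simple
# fixed-trial case where no simulation is needed. Numbers are illustrative.
from scipy.stats import norm

def power_two_sample(n_per_arm, effect, sd=1.0, alpha=0.025):
    """Power of a one-sided two-sample z-test with known sd."""
    se = sd * (2.0 / n_per_arm) ** 0.5          # SE of the mean difference
    z_alpha = norm.ppf(1.0 - alpha)             # critical value
    return 1.0 - norm.cdf(z_alpha - effect / se)

print(f"{power_two_sample(n_per_arm=100, effect=0.5):.3f}")  # about 0.94
```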

And you can imagine an absolutely brilliant person who can do analytical work way beyond my capabilities saying: we don't need simulation, I can do this analytically. Now, I think that only works up to a certain complexity of design, and it can be limiting. If you believe we have to analytically understand the trial, it immediately caps the complexity at a level that I think is limiting.

Okay, but we want to understand this. We understand the role of dropouts; we add really more complex designs, 10 or 15 analyses; we're dropping patient types; we're changing randomization; we have multiple arms in the trial, to a complexity where you can't do it analytically. And then it is a hugely valuable thing to have this digital twin of the physical system. We alter it and we create a better design. So this little rub between analytical work and simulation is going to play a role in multiple ways. Anytime somebody creates a digital twin, there's going to be this rub.

Now, digital twins aren't brought up as clinical trial simulation in the world of clinical trials, but it is absolutely a digital twin. Digital twin is a term that a company is using; the company's name is Unlearn. I don't have any relationship to Unlearn. I've talked to them. I think they're quite brilliant, and they've created something, perhaps even with some level of proprietary standing, where they label things as a digital twin, and that's been making its way. But this whole concept is also a really quite powerful thing within clinical trials.

The use of it in clinical trials has usually been about a patient. Can we create a digital twin of a patient? Not at a biological level, what their cholesterol level is and what happens biologically when we give them a statin, but one where we create a digital twin of what their endpoint, or endpoints of interest, would be in a trial if they were on a control. If they behave in a way for which we have previous data, we can create a potential digital twin of that patient.

So we'll use a couple of examples, and I think Unlearn talks about Alzheimer's and ALS. We have a patient that comes into a clinical trial. They have a baseline CDR Sum of Boxes, a baseline amyloid level, a baseline MMSE level, and we're going to run a trial where we look at 18-month outcomes for that patient. We're interested in that patient's CDR Sum of Boxes, the Clinical Dementia Rating scale endpoint commonly used in Alzheimer's trials. And we would like to understand what is going to happen to that patient 18 months from now if they were given a placebo. What is their CDR Sum of Boxes going to be? An incredibly powerful thing, if we can do this.

Now, think about why we run a clinical trial. We run clinical trials where we give a treatment to a patient, and what we really want to know is the counterfactual: what would have happened to that patient, Jane Doe, if we didn't give them the treatment? In an Alzheimer's trial, their CDR Sum of Boxes goes from one to four in the trial. What would have happened if we didn't give them the treatment? Would it have gone from one to seven? Higher values are worse on that endpoint. Would they have been worse? Would they have been better? Would they have been exactly the same? That's why we run a trial.

It's why we randomize patients in a trial to no treatment: so that we see some patients under the counterfactual. They're not actually the counterfactual; they're the factual of giving them a placebo. We observe what happens under placebo and what happens under treatment.

Imagine the power if we could give 500 patients a treatment and just ask: what is the digital twin of that patient at 18 months? And we analyze the trial against that. I'll talk a little bit more about how you would do that. Compared to that, our trials can be less than half the size, actually, because there's no variability to the twin. Maybe we need to characterize that.

But imagine if you could say: hey Alexa, what would have happened to Jane Doe without this experimental treatment? Incredibly powerful. It's why we randomize, it's why we do clinical trials, it's why we do blinding, it's why we do all of the scientific aspects of these trials, and it's why they're quite large and quite expensive: to get that honest answer.

Now, let's revisit a little bit of what Unlearn does, because I think it's important. They have come out with an announcement; you can go look at Unlearn and look at their news, where the title of their press release is that the European Medicines Agency qualifies Unlearn's AI-powered method for running smaller, faster clinical trials. So the immediate reaction is: oh wow, they're doing what Scott described, creating a digital twin and comparing patients to the digital twin, and EMA has qualified this. I think you have to take a step back. What have they qualified? Of course there's this phrase AI-powered; we'll come back to that. This AI-powered method for running smaller, faster clinical trials.

You can go read this on Unlearn's site, but what Unlearn does is they have data, historical data on Alzheimer's or ALS or whatever disease you need, from which they create an AI-based model. It says on their website that they use deep neural networks. So this is a little bit of trying to get into the details statistically; we understand a little bit about what it means to predict, or model digitally, what would happen to a patient 18 months from now. So they create this neural network model. They call it an AI model, but it's a neural network, and not just a neural network, a deep neural network, to do this forecast.

Their forecast is not a single-patient forecast with variability around it. It's essentially a model mean prediction, a model-average prediction from a neural network, based on their data, of what that patient would do with ALS, with Alzheimer's, with cardiovascular disease, with weight loss, whatever it is they're trying to do. That's what it is: a single model prediction.

Now, what has EMA qualified? It's not that the digital twin is a comparator for a patient on treatment. They use it in what they call PROCOVA; it's a covariate adjustment. In modeling, you've probably heard of an ANCOVA, where we use a baseline covariate, baseline CDR Sum of Boxes, to adjust for the 18-month value. That's an ANCOVA. So this is a PROCOVA: you use the digital twin's predicted value at 18 months, you put that in as a covariate in the model, and all of it is based on baseline quantities. A very legitimate thing to do.

And that's your covariate. This deep-neural-network value based on the historical data is your covariate, and you put it in for treatment and control alike. Think of it as a prediction for every patient: they put the patient into this AI-powered thing, what comes out is an 18-month value of, say, 3.7, and that goes in as a covariate in your model. And this creates smaller, more efficient trials.
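As a sketch of the mechanics, here is a toy PROCOVA-style analysis next to a plain ANCOVA on simulated data. The "twin" prediction is a stand-in linear prognostic model built from baseline alone, not Unlearn's actual network; everything here is illustrative:

```python
# Toy comparison: plain ANCOVA vs. a PROCOVA-style adjustment where the
# covariate is a prognostic prediction of the 18-month outcome. The "twin"
# here is a stand-in linear model, not Unlearn's network; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
baseline = rng.normal(3.0, 1.0, n)            # baseline score (e.g., CDR-SB)
treat = rng.integers(0, 2, n)                 # 1:1 randomization
y18 = baseline + 1.5 - 0.7 * treat + rng.normal(0, 1.0, n)  # true effect -0.7

# Prognostic "twin" covariate: predicted 18-month outcome under control.
twin_pred = baseline + 1.5

df = pd.DataFrame({"y18": y18, "treat": treat,
                   "baseline": baseline, "twin_pred": twin_pred})

ancova = smf.ols("y18 ~ treat + baseline", data=df).fit()    # plain ANCOVA
procova = smf.ols("y18 ~ treat + twin_pred", data=df).fit()  # PROCOVA-style

for name, fit in [("ANCOVA", ancova), ("PROCOVA-style", procova)]:
    print(f"{name:14s} effect = {fit.params['treat']:+.3f}  "
          f"SE = {fit.bse['treat']:.3f}")
```

In this toy case the two fits come out identical, because the twin prediction is just a linear function of the baseline covariate, which is exactly the point that follows: the twin covariate only buys you something if it predicts the 18-month outcome better than covariates you would fit yourself.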

So what are the benefits here? What's an alternative to this? An alternative is that you put in the baseline value, their amyloid value, their MMSE, characteristics of that patient, as covariates in the model. The trial fits those covariates based on the patients in the trial in both arms, and really that covariate adjustment is providing a prediction of that patient at 18 months. That prediction balances out the two treatment arms, and it allows us to reduce the variability of this outcome at 18 months. If we don't do any covariate adjustment, there's more variability; with covariate adjustment, we reduce that variability substantially in many circumstances. Baseline CDR Sum of Boxes, amyloid value, MMSE, ADAS-Cog plugged in are going to reduce the variability of the 18-month outcome. Incredibly important.

So now let's compare those two things. One trial uses baseline covariates and does not use Unlearn's EMA-qualified method, just baseline covariates doing this adjustment; there's going to be a certain variability in the 18-month outcome after adjustment. The other trial uses this PROCOVA digital twin adjustment; there's also going to be a certain amount of 18-month variability. The roles of both of these are identical. That's all this is trying to do. So the AI method has to be better than the covariate adjustments in our model, and I doubt it in most circumstances. I highly doubt it in most scenarios we have, though there might be some, and we'll talk a little bit about when it might be of value.
So, two things. First of all, if I have the data that this Unlearn machine has, we can do just as well. You could do just as well, and I'm not just saying Berry; you could do just as well using that data as covariates, figuring out the right way to model a covariate. Does this deep neural network do adjustments, or predictions of that 18-month value, better than linear covariates, or maybe even quadratic covariates? You could learn from that data the right way to do this.

I don't think so; I think it's a lot of bluster. Sure, I think it's very reasonable if you do it. I've got no qualms with it, and I think it's a very reasonable thing to do. I just have my doubts as to whether it's actually better. I doubt it.

Where's the scenario where it might be better? Let's be reasonable and think about where this might be better. Suppose they have a couple hundred patients in a rare disease, data that you don't have, and you're not sure how to do covariate adjustment, and they can provide this prediction. I think that could be of value: where you don't have the data and you don't know how to do the covariate adjustment, this could be a very reasonable thing to do in that scenario.

Now, let's think of ALS. ALS, by the way, has this incredible resource, the PRO-ACT database, an incredible resource in ALS that has the natural history of ALS patients' behavior, with randomized placebos in there. They might even have active treatments in this data set; I'm not sure of that, but it's an incredible resource. I'm sure when Unlearn builds their deep neural networks, they're using that database. So I don't think there's value there if you have the same data they have, for a couple of reasons. I think you could figure out the right covariate adjustment: time since onset of disease in ALS is a good predictor, and their current ALSFRS. I think you could do really, really well there, and you don't need this AI-powered thing.

I'm not even sure it's better, but they don't tell you what the model is. It's proprietary. So they give you a value of 3.1, but you don't know what the model is. I don't think that's the way forward here, and I think it's a bad way forward for AI in clinical trials. I think we need that model written down. I think we need that model to be verified and reprogrammed, so that, oh yes, we generate the same value. If digital twins go this route, where it's a super-secret deep neural network thing that just gives you a number, that's bad, and that's a bad way forward for clinical trials.

Now, I don't want to fault Unlearn. They may actually be doing things that are hugely valuable, and if that's out there, people are going to want to see it and use it in scenarios. But what is the benefit of it? It's a covariate adjustment, and I think there's not much there in these scenarios.

But let's think about the future, about where this is going. We are going to get really good at prediction at the individual patient level. I mentioned this PRO-ACT database in ALS. In a clinical trial, when a patient comes in, we can make a really, really good prediction of where that patient would be under natural history. Incredibly good. We don't actually take much advantage of that in clinical trials.

We're still randomizing. I'm not saying that's the wrong answer, but we're going to get really, really good. I was making light of it by saying: hey Alexa, what would happen to this patient? We're going to get really, really good at that, and moving forward we have to utilize it. We're going to get really good at it in a lot of different syndromes, where we get a snapshot of a patient at a point in time and we're going to be able to make a really good prediction for them. But let's think about that a little bit. Going forward, this whole digital twin technology, and we're going to start calling it different things, because the term can be slightly confusing, these model-based estimates of a patient, are going to become incredibly powerful moving forward.

It has to. Now, we have to come to grips with the causal aspects of this. We have to come to grips with the bias in who enters a clinical trial: is it characterized by baseline? And there's still going to be huge value in randomization, for example. So imagine a clinical trial where you do this and you randomize three to one, treatment to control. You've got the model-based predictions as part of the primary analysis, but you have real placebo patients to identify whether we're doing this well, and they're randomized placebos. That's a patient you enroll and say: we have a prediction for that patient if we do nothing to them, and then we get the value of doing nothing to them, which can verify and validate, within your experiment, the whole notion of digital twins. This becomes really, really powerful.

So this combination of treatment and controls, continually verifying and validating, using fewer patients and allowing a great deal more certainty about whether a treatment is working, is the future. If it's not the future, we're not doing this right. Now, where are we now?

This is actually being done. It's not necessarily called this, and that might be a good thing. In a number of scenarios, we do this. Actually, propensity modeling is exactly that: you have a historical database, you weight certain patients to be a digital version of this patient we've just enrolled, and then we weight their outcomes. It's digital twins.
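Here is a hedged sketch of that weighting idea: fit a propensity model for trial membership, then weight historical controls by their odds of looking like the enrolled patients. The data, the single covariate, and the logistic model are all illustrative simplifications:

```python
# Sketch of propensity-weighting historical controls so they stand in as
# digital versions of the enrolled patients. Data and model are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

x_hist = rng.normal(0.0, 1.0, size=(300, 1))            # historical controls
y_hist = 2.0 * x_hist[:, 0] + rng.normal(0, 1.0, 300)   # their outcomes
x_trial = rng.normal(0.5, 1.0, size=(100, 1))           # trial skews higher

# Propensity model: probability a patient came from the current trial.
X = np.vstack([x_hist, x_trial])
src = np.r_[np.zeros(len(x_hist)), np.ones(len(x_trial))]
ps = LogisticRegression().fit(X, src).predict_proba(x_hist)[:, 1]

# Weight each historical control by the odds it "looks like" a trial
# patient, so the weighted historical arm matches the trial population.
w = ps / (1.0 - ps)
print(f"Raw historical control mean:  {y_hist.mean():+.3f}")
print(f"Reweighted control mean:      {np.average(y_hist, weights=w):+.3f}")
```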

We do quite a bit of this at Berry, where we model historical databases and create a model-based estimate of a patient. We do this in ALS, where we create forecasts of a patient enrolled in a trial and what they're going to look like. We do this in FTD, we do this in Alzheimer's, we do this in many diseases. We've done it 50 times.

Now, what's interesting here is: what do you do with this? I want you to think about a scenario where what we get in a trial is a distribution. Now, I'm sure Unlearn has some level of prediction variability and all of that; I don't think it's part of the PROCOVA stuff. They have to do this at some level. But it's not going to be that we simulate a single patient; as a single paired observation, it's wholly inadequate. We need to simulate a million. And that's actually a distribution; that's what we know analytically as a distribution.

So, coming back to this issue of clinical trial simulation: we do a lot of clinical trial simulation, and we have critics. I think Stephen Senn has been a critic of simulation publicly on Twitter, and he fits into this category of the brilliant person who can do analytical things. We might similarly be critics of a single simulation of a single patient. I want a million; I want a googol of them, which is actually just an analytical distribution for every patient that walks in the door. Jane Doe comes into a clinical trial; we want a distribution for that patient. And the title of this In the Interim is Digital Googols, a googol being 10 to the hundredth. Digital googols of twins is just a distribution, an analytical characterization of that patient.

Now, how do we do this in a trial? How do we do p-values? How do we do Bayesian posterior probabilities? In many of our trials, the model we've created has a mechanistic effect, like a rate of decline, or a mechanistic behavior going into the trial, and we can actually include that: what is the rate of decline for patients on a treatment relative to control? We're estimating a theta. In an oncology trial, imagine if we have the behavior, the overall survival distribution, of a patient that comes in the door with certain characteristics and a certain cancer. We could then create this characterization, this model-based estimate of their overall survival time, and we can fit a hazard ratio based on the individual patients that come in.

So that's one step forward: using the model-based digital virtualization of that patient to characterize a treatment effect. This can be incredibly powerful, especially in neurological diseases, progressive diseases.

I'll throw out one where this has to be the future: Duchenne muscular dystrophy. We've had a number of successes and a number of controversial things. We have great data sources; they're actually not as accessible as PRO-ACT, which is actually one of the issues, but they're going to become accessible. We're going to have these phenomenal data sources; we've just got to get rid of the roadblocks. And we're going to use these, and we're going to have this incredible virtual characterization of a patient. We know what's going to happen to a four-year-old boy with Duchenne muscular dystrophy who walks 240 meters; we can show virtually what's going to happen. Can we characterize that and put it in there?

So it's a little bit of a different trial, and the way forward is not what we do now. Think about what we do now in trials: we have a treatment and a placebo, and we run things like t-tests, we run MMRM models. We're not going to do that with a single paired observation in a trial. Imagine you run a randomized trial where we have a googol of patients on placebo. That's what we have, and we're going to have to figure out the statistical way to handle that. And we've done that. You can do non-parametric tests where the model simulates a million observations of that single patient under control, and then you get the one observation, in the physical system only, of that patient on treatment. And we've got essentially no uncertainty in this distribution.

So these are going to be really cool, really cool trials going forward. I think the model has to be written down and verifiable, so people can go back and make it better as we go forward. As you get more and more data, it has to be included in this.

Imagine this system for what it is. It really is a disease Alexa: what is going to happen to this patient? And the characterization is unbelievably promising. I think that's the way forward.

So, in a way, I don't want you to think: oh, Scott's critical of digital twins. I might be the most excited person out there for this virtualization of a patient, in ways that I think models can do really, really well. And what it prevents is putting a lot of patients on a control. Think about how many patients we put on a control. And, first of all, randomized trials are so incredibly valuable.

We're seeing the reason for them within current healthcare policy and all of this: a complete lack of understanding of their role, the need for them, and the conclusions that come out of them compared to misbehaving uses of other data. So randomized trials are absolutely unbelievable, and they won't go away.

I think we'll continually randomize. But this idea that five trials out there all randomize 50% of their patients to control, and we're enrolling patients when we have an incredibly good idea of what's going to happen to them: it's bad. It's incredibly inefficient as a system. So we have to get there; we have to move there faster.

Now, it's only as good as the data. Twenty years ago, it just didn't exist; we really didn't have these data sources. Or it existed, some data sources existed within pharma, some within the NIH, some at other places, but you couldn't get your hands on them. We still have this issue. We still don't have great access to data, but it's getting better: medical systems, data sources out there. I think there's going to be an explosion of data, to the point that we don't know what to do with it.

Let me give you another circumstance. I wear a WHOOP band. It's like a watch, a wearable that tracks all kinds of things: my heart rate, my blood pressure, my strain on the day, how much I sleep. It's going to characterize things about my behavior, how fast I walk, how big my gait is, how much I type, all kinds of things that are going to characterize people in ways where, what are we going to do with this data? It's phenomenally powerful. So we're going to need really good models to do this; it might be neural networks that do it. But they need to be accessible, and they need to be open source to some extent, if we're going to use them in these incredibly powerful ways.

So: virtualization. Whether behind me is the Pacific Ocean, you may not know. But this whole idea of virtualization as the counterfactual in trials is incredibly powerful. I think this is the future. In fact, we're not just headed there, we are there, and this is being used. External data is being used in trials in really, really powerful ways.

By the way, it's always been used. We've approved drugs based on single-arm trials, compared to what we think historically would have happened to those patients: an ORR that just seems like it could never have happened under historical ways of treating patients. We're getting better and better at it. It's coming into rare diseases, and it's going to be coming into common diseases. And this notion, as a system, of enrolling all these patients on a control just seems, wow, incredibly inefficient.

So thank you for joining
me here in the interim.

Until next time, I am Scott Berry,
your host, or at least you think I am.

Creators and Guests

Scott Berry — Host; President and Senior Statistical Scientist at Berry Consultants, LLC