Judith: Welcome to Berry's In the
Interim podcast, where we explore the
cutting edge of innovative clinical
trial design for the pharmaceutical and
medical industries, and so much more.
Let's dive in.
Scott Berry: All right.
Welcome everybody.
Welcome back to In the Interim.
I am joined today by somebody you're all familiar with, Dr. Kert Viele, who helps me co-host some of the In the Interim broadcasts.
Today we are gonna talk about time travel. Maybe not quite time travel, but statistical time travel. We are gonna talk about the time machine and its relevance to platform trials, and we'll sneak in a little sports, all in this.
So welcome back to In the Interim.
Kert Viele: Thank you very much.
Good to see you, Scott.
Scott Berry: Great to see you.
Alright, so the topic is the time machine, and we'll call it the time machine; we have called it that, and we'll sort of describe why. We'll describe the relevance of this topic and why it, amazingly, gets lots of people very flustered or frustrated. And we'll talk about that.
So let's introduce the time machine.
Kert Viele: Well, let's go back in time, speaking of that, because you've been working on this since the nineties, something like that, when all is said and done.
Scott Berry: Yeah, so let's go back in time. It's interesting how this started. Many of you have heard some of the episodes we do about sports and drug development. As a kid, my father Don and I would talk about the old question, and pub fodder is the best term I have for it: sports enthusiasts will argue endlessly about the talent levels of players in different eras.
If you watched the MLB All-Star Game, which was very recent here, they did a really nice look back at Hank Aaron hitting his 715th home run, when he beat Babe Ruth's career home run record. It was a big deal at the time. Of course, they played at very different times. Even though Hank Aaron played for an incredibly long time, their careers never overlapped. So there are questions about who's the better home run hitter, and you could ask those questions about today: who's the better home run hitter?
And Don said something to me that stuck: if you take two players like Babe Ruth and Hank Aaron, they never played together, but they played with players who played together. So back in the nineties, I wanted to put this into actual algorithms.
And so there's this fantastic thing in sports where Babe Ruth never played with the great home run hitters of today, Aaron Judge, the New York Yankee, wonderful home run hitter. They never played together, but Babe Ruth's career overlapped Lou Gehrig's career, which overlapped Jimmie Foxx's career, which overlapped Ted Williams' career. Ted Williams' career overlaps Mickey Mantle's, which overlaps, probably, Hank Aaron's at that point, who played for quite a long time.
So there's this bridging, if you will, from Babe Ruth to Hank Aaron, and even to Aaron Judge today. They never played at the same time, but through this overlap you could estimate the relative home run ability of Babe Ruth to Jimmie Foxx, and Jimmie Foxx to Ted Williams, and Ted Williams to Mickey Mantle, and Mickey Mantle to Hank Aaron.
You could estimate all these players by simultaneously estimating the effects of era. I'm sorry, not age; age is a different thing we'll come to. But the era in which they played matters in home runs: there was a dead-ball era that had a different effect, and pitching varied over time. The era in which you played matters, but you can still estimate the relative skill of players who played at different times.
So I was involved in an effort with Pat Larkey and Shane Reese, where we wrote a paper that did exactly this. We did it in baseball, we did it in scoring in the NHL, and in golf. In many ways, golf is probably the simplest of those, because there's really only one stat that matters: how many strokes did it take to play 18 holes?
And you have exactly the same setup. If you wanted to compare Tiger Woods to Jack Nicklaus, or Scottie Scheffler to Jack Nicklaus, or Ben Hogan or Tom Watson, you have this incredible overlap where they played in the same tournament at the same time. In the NHL we have the same thing, where they played in the same game at the same time, and you can estimate the relative performance of players by bridging across this.
And then we estimated this relative ability, and we estimated the effects of era, adjusting by era to put everybody on exactly the same scale, comparing these apples to apples on exactly the same scale.
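To make that concrete, here is a minimal sketch of the bridging idea in Python. This is not the actual model from the paper, which was richer, with aging curves and far more data, and the numbers are roughly historical but illustrative only. A Poisson regression with player effects and season (era) effects, with log at-bats as an offset, puts everyone on a common scale; players who never overlapped are connected through the chain of shared seasons.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative player-season home run counts (approximate, for sketch only).
# No single player spans all the seasons, but shared seasons chain the eras.
df = pd.DataFrame([
    ("Ruth", "1927", 60, 540),     ("Ruth", "1932", 41, 457),
    ("Gehrig", "1927", 47, 584),   ("Gehrig", "1932", 34, 596),
    ("Gehrig", "1937", 37, 569),   ("Foxx", "1932", 58, 585),
    ("Foxx", "1937", 36, 569),     ("Foxx", "1941", 19, 487),
    ("DiMaggio", "1937", 46, 621), ("DiMaggio", "1941", 30, 541),
    ("DiMaggio", "1947", 20, 534), ("Williams", "1941", 37, 456),
    ("Williams", "1947", 32, 528), ("Williams", "1956", 24, 400),
    ("Mantle", "1956", 52, 533),   ("Mantle", "1961", 54, 514),
    ("Maris", "1961", 61, 590),
], columns=["player", "season", "hr", "ab"])

# Poisson GLM: log(home run rate) = player effect + season (era) effect.
fit = smf.glm(
    "hr ~ C(player) + C(season)", data=df,
    family=sm.families.Poisson(), offset=np.log(df["ab"]),
).fit()

# Player coefficients are era-adjusted log home run rates: the model's way
# of moving everybody into the same season before comparing them.
print(fit.params.filter(like="player").sort_values(ascending=False))
```

The season coefficients are the era effects, the dead-ball era and the rest, and the player coefficients are the time-machine comparison.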
So you could compare Hank Aaron to Babe Ruth. You could move Hank Aaron from 1973 to 1927 and say, what would he have done? Now, we called it a time machine because we weren't interested in supposing Hank Aaron had grown up in 1920. He wouldn't have played in the major leagues, because he was Black and he couldn't have played. Or supposing he had the nutrition of the time and grew up then. We weren't interested in that.
We were interested in a time machine moving Jack Nicklaus to 2025 as a 25-year-old superstar: what would he do compared to Scottie Scheffler now, or Tiger Woods in 2001, moving them in time. So that was the paper and that was the effort, and by the way, it was published in the Journal of the American Statistical Association. It was actually picked as the applications and case studies paper to present at the Joint Statistical Meetings. That paper, that effort.
So that was the introduction to a statistical time machine, Kert.
Kert Viele: So who did you say was the best?
Scott Berry: Yeah, which everybody wants to argue about, and that's fantastic. Interestingly, in baseball we did home runs and batting average. We did this in the mid 1990s, and if I remember, it's been a while, I think Mark McGwire was the best home run hitter of all time in that analysis. The best golfer of all time at that point, I believe, was still Jack Nicklaus, but that was pre Tiger Woods. In recent updates of that, Tiger Woods is the best.
Within that, the NHL was fantastic in terms of scoring, because there are amazing things around when Wayne Gretzky played. He played in an era where goal scoring was much higher. He was still the greatest scorer of all time, but not by nearly as much as I expected. His numbers dwarf everybody's, but he played in a high-scoring era. These are all fantastic things to talk about.
By the way, as we start to move this to clinical trials, we'll figure out why anybody would care about a time machine in clinical trials; it seems like such an odd thing. It's much more challenging in sports, because players over that time are at different ages, and the age of the participants certainly matters. Jack Nicklaus at 55 was not Jack Nicklaus at 30, so we had to model how players age within the different sports, and it's different in golf and baseball and hockey. So there's a component of that which adds to the challenges. Go to that paper and you can certainly read about all of that. But that's different than clinical trials.
Kert Viele: Alright, so let's move to clinical trials. I think the first time you see a time machine is probably I-SPY-ish, where you're starting a platform. It's 2010. Why don't you talk about the start of I-SPY and how a time machine got into it?
Scott Berry: Yeah, I-SPY 2 is a platform trial in neoadjuvant breast cancer. Just very briefly, it was doing personalized medicine in, I'll refer to it as four, but it's actually eight, types of breast cancer, stratified by hormone receptor status and HER2 status, so you could be positive or negative on each. Additionally, there's MammaPrint positive or negative, so there's really eight. Let's talk about eight.
And different therapies work differently in those eight subtypes, and in the middle of that trial the standard of care changed in one of the subtypes. It was a platform trial with different arms at different times: one experimental arm might be there in years one and two, another in years two and three, another in three and four, four and five. So multiple investigational arms are in I-SPY, and the standard of care was changing. And the beautiful thing about it was that the standard of care being introduced was an arm that had been in I-SPY 2 as an experimental arm. And so now we wanted to make comparisons to a new standard of care in, say, year six of this platform trial, with maybe 15 arms at that point.
And this is like players in golf, we'll call it golf, that had been playing in this common arena at the same time, with overlapping eras in the trial, and now we have a new standard of care. And Don and the design team said, how are we going to do this and make these estimates? It became important to do relative comparisons not only to the new standard of care but also to the old standard of care.
And this problem lays out exactly like bridging eras in sports. Different experimental arms were in this platform trial at different times, and we were interested in making comparisons to previous arms and current arms, actually all the arms, in an apples-to-apples way. But they were in the trial at different times, and we needed to be able to make those comparisons.
Kert Viele: So you've got a couple of differences here. You mentioned age; you don't have patients, or players, getting older or younger, per se. Maybe they change a little bit because the standard of care is evolving, or whatever that may be, but it's a lot less. But you also have a lot fewer arms in the trial. You have thousands upon thousands of players in these databases. What was the magic number that said to do this?
Scott Berry: Yeah, that's right. At any time in I-SPY 2 there may have been two or three arms in there, and the nice thing is, in many of these subtypes of women, there was a constant control arm. Call it Hank Aaron: Hank Aaron played for almost 25 years, so he had this really long career. The control arm in that trial overlapped a lot, and there was always a connection. At every particular point, you could always connect it through a particular arm. You need at least this connection from one era to the next. If you ever, all of a sudden, stopped every arm and brought all new ones in afterward, you'd lose that connection. So you need at least one connection, and the more connections you have, the more overlap in this bridging, certainly the better your ability to understand each era in the trial.
Kert Viele: So I've seen a lot of papers consider this. They bring it back to meta-analysis, where "connected" is actually a phrase in network meta-analysis: how many arms overlap between trials? This is essentially the same idea. It's just that instead of individual trials, you've got what happened in this trial in 2017, 2016, 2015. Those are what need to be connected.
Scott Berry: Yeah, that's right. And so in I-SPY 2, just to set this up, we instituted the time machine, and I think it was the first use of the time machine in a platform trial. That trial has run over 10 years, 25-ish arms. It's had a new evolution where actually some endpoints have changed and all of that, and we can talk about ways in which we wouldn't want to do this, but over that time, maybe 25 arms. And it estimated the relative comparison of these arms and the effects of era incredibly well over those 10 years. And it brought huge value to that trial, because we didn't have to over-enroll the control. Because of this overlap, we could enroll one out of five patients on the control, and it was estimated incredibly well. So with fewer resources, we had great information on the control.
And by the way, if you asked, over the course of I-SPY 2, what was the estimate of the era effects? In the sports paper, it's actually fascinating how the eras change home run rates, batting average rates, scoring rates. Going back and looking at that trial, it was very similar over the 10 years: if you gave the same treatment seven years apart to the same woman with the same subtype of cancer, you had very similar outcomes. So there wasn't much era effect in that trial.
Kert Viele: So Hank Aaron's always 25, and 1930 looks like 1970.
Scott Berry: Yep. And in part it was the incredible fact that in breast cancer they had identified these four really important subtypes that affected outcome and affected treatment effects. They had it so well characterized that there wasn't much era effect. You could imagine other diseases where we don't really understand that very well, where it may affect some patients and not others, and you may get more era effects.
Kert Viele: And I think this brings out one of the key assumptions in the time machine, and it compares to using historical borrowing, for example, from other trials. If you are bringing in data where there are none of these bridges, you need that external data to be equal, in absolute magnitude, to what's currently enrolling. The assumption here is really that the relative differences are stable over time, in terms of the treatment effect: the differences between arms are stable. You talk about a dead-ball era and a live-ball era and so on; that's adjusted for. And here it would be essentially the same: drugs that work well in 2013 work well in 2016. It's not that drugs turn on and off, so to speak.
Scott Berry: Yeah, so let's get into that. I think people can see now, we've instituted the time machine in multiple platform trials. And by the way, it's a new thing in clinical trial science: with the advent of platform trials, we actually have this incredible opportunity, in the same trial, to have this ongoing set of data with overlapping treatment arms, where we can build this and provide better estimates of treatment effect. So in sports and in this setting, it makes the assumption that we're estimating the relative treatment effect between arms. In many of the models we have, we can call this an additive effect; is that fair to refer to it that way? This could be additive on a logistic regression scale, or on a time-to-event scale, or a quantitative scale, but the relative effect of two treatments is constant in any era. Now, if you have that assumption, this model works incredibly well, and there are multiple papers on this.
The Bofill Roig paper, I don't know if I'm saying her name right, Marta, I know Marta, that paper you were involved in shows you get unbiased estimates as long as you have this additive effect of treatment. So while we're using controls from different times, with this adjustment we get unbiased estimates, and the mean squared error is tremendously better. A really nice paper showing this. We have a paper, the Bayesian time machine, Saville et al., that walks through this. So that's awesome.
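For readers who want to see the shape of this, here is a minimal simulation sketch in Python, my own illustration rather than any of those papers' code. Treatment effects are additive on the log-odds scale and constant across eras, while the underlying response rate drifts; a logistic model with arm effects plus period effects then recovers era-adjusted comparisons, with non-concurrent controls entering only through the shared period adjustment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical platform layout: control runs all four periods; arm A is in
# periods 0-1, arm B in periods 1-3. Arm effects (log odds ratios) are
# constant across eras -- the additive assumption -- while the baseline drifts.
period_arms = [("control", "A"), ("control", "A", "B"),
               ("control", "B"), ("control", "B")]
arm_effect = {"control": 0.0, "A": 0.5, "B": 0.8}
era_drift = [0.0, 0.3, 0.5, 0.4]

rows = []
for p, arms in enumerate(period_arms):
    for arm in arms:
        logit = -1.0 + era_drift[p] + arm_effect[arm]
        for y in rng.binomial(1, 1 / (1 + np.exp(-logit)), size=80):
            rows.append((arm, str(p), y))
df = pd.DataFrame(rows, columns=["arm", "period", "y"])

# Time machine, time-categorical flavor: arm effects plus period effects.
fit = smf.glm("y ~ C(arm, Treatment(reference='control')) + C(period)",
              data=df, family=sm.families.Binomial()).fit()
print(fit.params.filter(like="arm"))  # era-adjusted log odds ratios vs control
```

Drop the `C(period)` term and the estimate for arm B picks up bias from the drift; keep it, and the comparison is apples to apples even though some of B's controls predate it.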
Now, how might this break down? I'll let you talk about it in clinical trials, but in sports this works really well. What might be an interaction? When people saw this paper, everybody wanted to argue that you didn't incorporate this or that. But suppose Babe Ruth in the 1920s is really good when pitchers throw a lot of off-speed pitches. Historically, pitchers then threw nine full innings; there wasn't much in the way of relievers, so he's facing guys throwing more off-speed stuff. If he faced guys who threw a lot of fastballs, he might not be very good. And now we have the advent of a lot of fastballs, relievers coming in every inning throwing a lot of fastballs, and maybe he wasn't a good fastball hitter. So the relative difference between him and Aaron Judge might be different in the 1920s than in the 2020s, a hundred-year time lapse here.
Now, you can create interactions like that where the additive effect doesn't hold. You can do that in golf with equipment and all that, but largely, for example in golf, I don't believe there are any interactions. You can dream them up, but they're not real: if somebody was better in the 1950s, they would be better now if you moved them and they had to play with that equipment and all of that. Largely the same in baseball; I don't believe those interactions. Now, what does it mean in a platform trial, Kert?
Kert Viele: So for a clinical trial, generally speaking, your treatment effect depends on who you're studying and what the disease looks like. The prototypical worry is COVID, where there are variants of the disease, and in addition, compare 2020 and 2022. In 2020 you've got lots of severe people in the hospital; by 2022 there are more beds, more people making it into the hospital who otherwise would've been sent home. You're treating different people, and the idea is that you could have a drug that, because you're treating different people, has a different treatment effect: it works well on severe disease in 2020, but in 2022 you're not treating severe disease anymore, and the treatment effect changes. Just as there's lots of work on observational data and causal inference, when we talk about that assumption on the treatment effect, you can adjust for covariates; the assumption applies after those adjustments, so you can mitigate this risk. But that's the concern, and it certainly would break this down.
Scott Berry: Yeah, so infectious disease is one scenario where it might even be a different disease. An antiviral that attacks COVID in 2020 might be different from an antiviral in 2023. But let's think about ALS, Duchenne muscular dystrophy, breast cancer, lung cancer, Alzheimer's disease, obesity, almost all of these other diseases. It would be very rare that treatment A is better than B in 2015 but worse than B in 2025. You almost have to try to be creative to even construct scenarios where that happens. Maybe there's a different background therapy with an interaction, but by the way, we have that information; we can look at that, so that could be the potential.
A couple of parts to this are important. You and I are both shocked by the reaction to this, and we'll come back to that. But you brought up external data. There's lots of talk in the world now of real-world evidence, the use of historical information in trials, where we might run a clinical trial in breast cancer and want to bring in information on previous controls, or do the same in a trial in ALS. This is somewhat all the rage, people trying to use that external data. And as you brought up, it's from a different protocol; patients meet different inclusion-exclusion criteria; the data fidelity is different; the data collection is different; it was a different time. All of those things are different when you use external data.
In a platform trial, these patients are being collected in the same trial, on the same protocol, with the same data. They meet the same inclusion-exclusion criteria. They're in the same trial. They were all randomized. So we have this phenomenal, we can call it historical data, though I think that demeans it in a way that's inappropriate. It's the greatest historical data you could ever have: controls in the same clinical trial. The only difference is the time at which they were enrolled, and we can model that.
Kert Viele: And it's a testable assumption, because we've got all of this over time. We can look: do the treatment effects look different era to era? You also mentioned that I-SPY changed the control arm, but a lot of platforms don't. You've got this very stable arm that's in there as an anchor to the model the whole time. Certainly these things can happen, but you're at far less risk doing a time machine than with any other kind of borrowing or real-world evidence. Now, we should be upfront here: this is not a randomized comparison, so there's a reason people have this reaction. The magic of a randomized clinical trial is that at any given time you have a control group that's comparable to the treatment group. We're talking about bringing in other information, so that is gonna break that assumption. The question is, just like anything else we borrow data for: do we get a better answer? Do we treat patients better by using the information as opposed to ignoring it?
Scott Berry: Yeah, we haven't really explained one thing, by the way, and maybe I'll throw this to you: how do you model time? Suppose you've got a platform trial that runs for 15 years with multiple overlapping arms, and let's just say we model a year effect. You could model each year's effect independently, or you could smooth over time. Our Bayesian model is a little bit different. So describe those two approaches.
Kert Viele: So essentially, as you were saying, we bin time. We could do every year, every six months, every three months. Statistically, the right thing to do is that every time you change allocation, a new drug coming in or out, that should be a new era. But a lot of times we approximate this with a number of months and just bin into set time intervals. We often call this a time-categorical model, where it's almost like we blocked on time, and we're setting a separate effect for each bin. The time machine has exactly the same structure, but we smooth those estimates, and I think you and I tend to sit at slightly different points on this continuum. I tend to do more of the categorical approach; you tend to do more of the smoothing. I think this is like dose-response modeling: if you have lots of doses, it makes sense to smooth; if you have a few intervals, maybe it doesn't help as much. But that's the basic difference: you're blocking over time, and then the question is whether to smooth or not.
Scott Berry: Yeah. And that particular problem is largely a time series problem: we're modeling this covariate over time, and how do you model it? The Bayesian time machine generally smooths; it tends to use a Bayesian smoothing spline to do that smoothing over time. A lot of the trials I'm involved in may have small intervals without a huge sample size in each interval, and hence there are benefits to the smoothing. If you have big chunks with lots of patients in them, time-categorical is largely the same thing.
Kert Viele: Yeah.
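Here is a minimal numpy sketch of the difference, under simplifying assumptions of my own: a normal endpoint and a first-order random-walk prior on the bin effects, a simplified stand-in for the Bayesian smoothing spline Scott mentions. The time-categorical estimate is just each bin's mean; the smoothed estimate solves a ridge-type system that shrinks adjacent bins toward each other.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical control data in 8 time bins with a slow underlying drift.
n_bins, n_per = 8, 25
true_era = 0.4 * np.sin(np.arange(n_bins) / 2.0)
bin_idx = np.repeat(np.arange(n_bins), n_per)
y = true_era[bin_idx] + rng.normal(0.0, 1.0, bin_idx.size)

counts = np.bincount(bin_idx, minlength=n_bins).astype(float)
sums = np.bincount(bin_idx, weights=y, minlength=n_bins)

# Time-categorical: every bin estimated independently by its own mean.
era_cat = sums / counts

# Smoothed: posterior mode under a first-order random-walk prior, i.e. a
# penalty on differences between adjacent bin effects.
D = np.diff(np.eye(n_bins), axis=0)   # adjacent-difference matrix
lam = 10.0                            # noise-to-drift variance ratio
era_smooth = np.linalg.solve(np.diag(counts) + lam * D.T @ D, sums)

for t in range(n_bins):
    print(f"bin {t}: independent {era_cat[t]:+.2f}   smoothed {era_smooth[t]:+.2f}")
```

With big bins and many patients per bin, the two estimates nearly coincide, which is the point made above; with many small bins, the smoothing stabilizes the era effects.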
Scott Berry: Okay. So it's interesting that in this setting we can even talk about frequentist properties, like unbiased estimates of the relative comparison of any particular arms, as long as we don't have this interaction effect. Now, people can present examples where, if there's an interaction, you run into trouble with this estimate, which is a fantastic thing for us to think about, because nobody's ever talked about this before. Maybe in an infectious disease there's an interaction, and if there's an interaction, all of the stuff we've been doing previously is hugely problematic too.
Kert Viele: I actually tend to be a troll when I'm reviewing papers like this, and for what it's worth, I'm often reviewer two, but I'm usually a nice reviewer two, at least. There's a notion that, oh, this doesn't work, and this could happen, and you have these examples. But I often go back to the protocols that have been written, and they repeatedly refer to "the treatment effect," "the treatment effect," "the treatment effect." So as you said, it's not considered. And the breakdown is actually pretty bad, because if you don't think a drug does the same thing in 2022 as in 2023 as in 2024, how do I approve it in 2025 and use it in 2026 and feel confident? Obviously there's a leap of faith there. I think we all know that's problematic; this isn't a new issue where people have suddenly gone, oh, it's terrible here, and it doesn't exist in the other place.
Scott Berry: Yeah, and if you believe that this interaction exists, it means it's hard to ever approve a drug going forward. We'd need to constantly re-test it, year after year, to ask: does this drug still work? The cases of this would be frighteningly rare, but statisticians are really good at pointing out that something can happen, and all of a sudden, because we're now doing this, it gets treated as a new problem.
So you and I have both been really surprised by the pushback on this. Just last week we got comments from the FDA. By the way, I'm a huge fan of the FDA, but they were questioning the use of this in an oncology setting, in a phase two trial, the potential use of the time machine. And it's so weird to me that this is a concern in that setting, of course, within a therapeutic area where historically we create an objective performance criterion based on historical data, a 15% response rate that you're trying to beat. And now, when we try to use this time machine, there's almost this reticence to use it, which means you ignore a ton of data from the same trial. It has surprised me greatly that there's been pushback on this.
Kert Viele: And maybe I'll push back a little bit on one thing you said. I do think some of these interactions could happen. You could imagine a treatment effect varying from 20 to 25% over the course of the trial, some unmeasured covariate. But usually it's the magnitude: people worry about type I error inflation in this, and if you're talking about small differences, you get small type I error inflation, 2.6%, 2.7%, 2.8%. A log-rank test will do that; basically most asymptotic results do that. We do this all the time. So I accept the possibility, but I don't accept that it's a catastrophe.
Scott Berry: Yeah, yep. And ignoring really valuable data has huge impacts, much more than the 2.6.
Kert Viele: Type II error. Type II errors matter too.
Scott Berry: Yeah. And the other implication is that you collect a whole lot of new control data that you probably don't need to collect, when you could use the data you already have in the trial in really smart, particular ways.
Kert Viele: And certainly I
wouldn't recommend ever eliminating
a control arm completely.
That's not what we're talking about doing.
Scott Berry: Yeah, and that's the huge value of being able to use time: having this overlap, and a control arm contributes hugely to that. Now, I don't use the time machine in every study. There are some diseases, and I was involved in a number of COVID efforts, where the REMAP-CAP trial, for example, does not use non-concurrent controls; in that disease setting it was thought best not to. We did use time adjustment, even though we don't use non-concurrent controls, which actually confused some people into thinking we were using non-concurrent controls. It's disease specific, and there may be other diseases with massive changes in the disease and in the way it's treated, where we worry about exactly those things. So this can very much be disease specific. And then there are other diseases where we know there have been very few differences over time.
Kert Viele: There are some cases where you never know what happened. I remember when we were involved in Los Angeles modeling of COVID, and there were about three times during those 18 months, whatever it was exactly, where clearly the amount of time people spent in the hospital changed. We don't know why; we had to address it in our modeling. These are the kinds of things you can't anticipate.
Scott Berry: Yeah. Now, it's also one of the huge values of these platform trials: this estimate of time actually becomes a scientific quantity of interest. I brought this up in the sports example, looking at Major League Baseball from 1920 to 2020 and what a player would have done had they played in different eras; those estimates are fantastic stuff. Looking at the NHL, for example, in the mid 1980s a player would have scored many more points than they would now; it's just a different sort of game, and that's a fantastic thing. The same disease-level learning happens in these platform trials with these estimates of time, and we have a number of these trials showing this. And as you say, we can explore: is there any evidence of interaction in there, if that's the thing people are concerned about in these trials?
Kert Viele: I think one thing we ought to say here is that people get reasonably nervous at the thought of, say, I-SPY borrowing data from 2010 for a drug that enrolled in 2020. What does that really mean? Are you pooling that? And certainly that isn't what this model does; it depends on the overlap. When you're doing sports, you've got thousands of players going back in time. In a clinical trial we don't have that; we maybe have a couple dozen arms. If you are looking at a drug in 2020, you give some weight to the patients who were enrolled in 2019, less weight to 2018, less weight to 2017. And it depends on this overlap, on how many patients. We're not pooling patients from 2010; they're largely discounted because of the lack of overlap in a clinical trial.
Scott Berry: Yeah, that's a beautiful thing, and I think you did this in that paper, where, in a time-categorical model, you can actually calculate the contribution of a patient depending on the overlap from earlier. The variance-covariance matrix gives you this beautiful weighting of the patients, and you get this natural down-weighting. It's a beautiful thing.
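Those implicit weights can be computed directly, since the model-based estimate of an arm effect is a linear combination of every patient's outcome. A small illustration in Python, with a hypothetical three-period layout of my own, not any specific trial:

```python
import numpy as np
import pandas as pd

# Hypothetical layout: control enrolls throughout; arm A spans periods 0-1,
# arm B spans periods 1-2, so arm A bridges B back to the period-0 controls.
rows = []
for p, arms in enumerate([("control", "A"), ("control", "A", "B"),
                          ("control", "B")]):
    for arm in arms:
        rows += [(arm, p)] * 20
df = pd.DataFrame(rows, columns=["arm", "period"])

# Design matrix: intercept, arm effects (control baseline), period effects.
names = ["intercept", "armA", "armB", "period1", "period2"]
X = np.column_stack([
    np.ones(len(df)),
    (df["arm"] == "A").to_numpy(float),
    (df["arm"] == "B").to_numpy(float),
    (df["period"] == 1).to_numpy(float),
    (df["period"] == 2).to_numpy(float),
])

# The least-squares estimate of armB is c'(X'X)^{-1}X'y: a weight on every
# patient's outcome. We can read the weights off without any outcome data.
c = np.array([float(n == "armB") for n in names])
w = c @ np.linalg.inv(X.T @ X) @ X.T

ctrl = (df["arm"] == "control").to_numpy()
for p in range(3):
    m = ctrl & (df["period"] == p).to_numpy()
    print(f"period {p} controls: total weight {w[m].sum():+.3f}")
```

The non-concurrent period-0 controls get a small but non-zero weight, flowing through arm A's bridge, while the concurrent controls dominate. Nothing from the past is naively pooled, and an era with no bridging arm at all would contribute zero weight.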
Kert Viele: And the HEALEY trial uses this to some extent, because there's a window; it doesn't borrow back infinitely in time. It borrows back to a certain degree.
Scott Berry: Yeah. And you're a fan of potentially even just algorithmically cutting a line and saying, we're gonna do this for the last five years or something. That might be disease specific as well, depending on what we think about how the disease, and how the treatment, has moved within these trials.
Kert Viele: I think you could just basically look and say, this is only contributing 1% of the effective sample size; it isn't worth it. You avoid problems with people having to publish while there's live data still being used. A lot of problems go away if you eventually realize this really isn't helping us much: let's put a line in the sand and cut it.
Scott Berry: Yep, yep. Alright, Kert, so if you could use this time machine and go back and talk to Kert Viele, the graduate student at Carnegie Mellon, what would you tell yourself?
Kert Viele: Oh, that's a terrible question to ask. Let's see. I remember when I was in grad school, probably the least likely thing you would've thought I'd do was end up as a biostatistician, 'cause I was incredibly computational, incredibly theoretical. You'll remember we had this bet: you wanted to use the fewest theorems possible in your dissertation, whereas I had pretty much ruined that already. So I think this notion that the applied stuff is where you can actually really affect the world would be on that list.
Scott Berry: Yeah, awesome. I did have a theorem in my dissertation, by the way.
Kert Viele: Did you? You had one?
Scott Berry: Yeah, I think I had a couple. We need to do one of these episodes on my dissertation at some point, which, for those interested, was how to optimally play hide and seek, which you may find interesting. By the way, if I could go back in time, I might tell myself to do a different dissertation topic, but I enjoyed it, I loved it, and we're in a good place.
Kert Viele: I have never revisited my dissertation.
Scott Berry: Yeah, we're in a good place. You know where we are, Kert: we are in the interim. And till next time, thank you all for joining us here in the interim. Cheers.