FACTS 7.1 Release with Tom Parke Episode 3

27:33

Judith: Welcome to Berry's In the
Interim podcast, where we explore the

cutting edge of innovative clinical
trial design for the pharmaceutical and

medical industries, and so much more.

Let's dive in.

Scott Berry: All right, hello everybody. Welcome to In the Interim, our podcast about all things statistics and science in the world of clinical trials. I'm Scott Berry, a biostatistician here at Berry Consultants, and I'm joined by Tom Parke, also of Berry Consultants, our director of software. Welcome, Tom.

Tom Parke: Thanks very much, Scott.

Hi, everybody.

Scott Berry: Yeah. This is our third podcast thus far, and Tom is joining us from Abingdon, UK. You can see it's nice and bright in my background here if you're watching us, while it's night time in the UK, so thanks for joining us.

We're going to talk about clinical trial simulation today, and we're going to touch on a number of things. We're going to talk about FACTS. What is FACTS? How do we use it? And we have a new release of FACTS. So let's start with: what is FACTS, Tom?

Tom Parke: FACTS is software that enables you to simulate a clinical trial design. It's something we've been working on for 16 years now, and in fact I've been working on clinical trial simulators for quite a lot longer than that. When I worked on my first one, I just loved it. It turned trial design and statistics into a game, I'll be honest, in a good way.

You could try different options, try different things. You'd ask: does this make my trial better? Have I won? Have I improved the power? Have I got my result more quickly? All these different things.

And I also loved the way that, when you output everything that had been simulated, you had these entrails to dive into, to see what had happened. You could see times when maybe the design hadn't done the right thing, and what had misled it: why had it taken that track? I just love these things.

And it enables you to understand and explore designs that you can't calculate an answer to. I love that too, because it means you're not constrained by what you can calculate. You don't say, "Oh, well, we couldn't actually calculate that, so we'd better not do it." You say, "We'll put that in and we'll just simulate it enough times, and we'll be able to estimate what comes out." I love the freedom that that brings.

Scott Berry: I love the board game reference. I know you and I are both strategy board game people. Maybe when the board game ends and you didn't win, it's "Oh, I want to do this again. I can beat this game." I get that flavor of this.

Interestingly, you and I worked together even before Berry Consultants, which is approaching its 25th anniversary this summer. This came up with the ASTIN stroke trial, and we worked together before you joined Berry Consultants on the Lilly Trulicity trial, which was a highly simulated, optimized clinical trial. There have been lecanemab in Alzheimer's, a number of these. So FACTS was not our first attempt at clinical trial simulators.

Tom Parke: No, our first one was for the ASTIN trial. We didn't write it; Don and Peter Müller mainly wrote the simulator. A colleague and I, Rob Nelson, took that simulator, generalized it, and gave it more of a user interface. We attempted to interest people in it, particularly at Pfizer, and in order to do that we kept adding features: maybe this will interest people, maybe this will interest people. We got to a point where the whole code base became unwieldy, because we just kept adding more and more features, and every time we wanted to add a new feature on top, we had to worry about how it interacted with every other feature we'd already put in there.

So it kind of stalled at one point. It just got too big and had too much stuff in it already to really be able to go any further.

Then we worked with Wyeth and built a second simulator, the Adaptive Design Explorer. There we took the opposite approach: instead of trying to build a simulator that had lots of options and could do lots of things, we built a framework into which the individual simulators for a particular trial could be added. The idea was that it would accumulate interesting designs, and then when you wanted a new trial you could look at these existing designs and try to find one that was appropriate. The difficulty there was that maybe it wasn't flexible enough: maybe there wasn't a design that matched what you wanted to do, or maybe there were two designs and one had one bit you wanted and the other had a different bit you wanted. And as the designs became more different, creating a framework which could handle all the different types became more and more difficult.

And then we had a chance to do it a third time, and I think we got it right.

Scott Berry: Yeah. So let's take a step back and talk about trial simulation. You're in an interesting situation here: you're creating software that is used by people outside of Berry Consultants. It's owned by Berry Consultants, but it's also used heavily by people inside Berry Consultants, me particularly, a heavy user.

So what is this? We're trying to design a clinical trial. The historical way of doing this might be a power calculation, with very rudimentary pencil-and-paper calculations. That's okay for a fixed trial design, where we're going to enroll 200 patients and in three years we're going to look at the data. But all of a sudden it became: let's think about more complicated trial designs. Let's do multi-stage trial designs, adaptive trial designs, changing randomization probabilities, with the goal of a better trial. Now, building this thing is much more complicated. Does it work? Well, you can't just throw a bunch of stuff together, run it, and say, "Oh, shoot, that was a bad design." The goal is to really understand that you're building a good design, and the only way to do that is through trial simulation. You can't do it analytically. You can't go to a textbook and look up the right design for this. They're all custom.
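That "you can't do it analytically" point can be made concrete with a toy sketch (not FACTS itself, and not any real trial's design): even a simple two-arm design with one interim futility look is easiest to evaluate by Monte Carlo, estimating type I error and power from repeated simulated trials.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(delta, n_per_arm=100, interim=50, futility_z=0.0, alpha_z=1.96):
    """One simulated two-arm trial with a single interim futility look.

    delta: true standardized treatment effect. Returns True if the trial wins.
    """
    ctrl = rng.normal(0.0, 1.0, n_per_arm)
    trt = rng.normal(delta, 1.0, n_per_arm)
    # Interim look at half the data: stop for futility if the z-score is too low.
    z_int = (trt[:interim].mean() - ctrl[:interim].mean()) / np.sqrt(2 / interim)
    if z_int < futility_z:
        return False
    # Final analysis on all the data.
    z_fin = (trt.mean() - ctrl.mean()) / np.sqrt(2 / n_per_arm)
    return z_fin > alpha_z

def operating_characteristics(delta, n_sims=5000, **kw):
    wins = sum(simulate_trial(delta, **kw) for _ in range(n_sims))
    return wins / n_sims

print("type-I error:", operating_characteristics(0.0))  # below one-sided 0.025, thanks to futility stops
print("power:       ", operating_characteristics(0.4))
```

Changing the futility boundary or interim timing and re-running is exactly the "try different options, have I won?" loop described earlier.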

So when we started working together and building these trials, you had to write custom code for every trial design. We made reference to the ASTIN trial, which I was not involved in and you were very much involved in, with Peter Müller and Don: it was a year and a half of writing custom code to simulate that trial design. We made reference to the Trulicity trial: it was nine months of writing custom code. You get it wrong; all of a sudden somebody says, "Hey, can you add this feature?", and now somebody's got to go write code to add that feature, test it, make sure it's right. In the working cadence of building a trial, it's awkward, it's slow. So: can we create software that allows you to do this quickly and efficiently, and to explore different things? That's the whole goal, and that's what we tried to get right with FACTS.

So, just setting the tone of this: FACTS is now a product. It was our third attempt at this simulation software. It's used extensively by statisticians inside Berry Consultants, and outside of Berry Consultants as well. So now we've got an update to FACTS.

Tom Parke: Yes. We've just released FACTS 7.1, and while every time we release FACTS it improves, this particular release is probably a bigger improvement in the underlying code quality than previous releases. In previous releases we've majored on a big new feature or a big new type of trial you could simulate. Here, although we test and validate the code very thoroughly, what we'd found was that there were two areas where what FACTS was trying to do was more complicated than we'd realized when we coded it.

Both areas needed some careful thought about what it is people actually want here, and what it is FACTS is trying to do. The important thing is to make FACTS relatively predictable, so that when a user chooses an option and asks FACTS to simulate it, they understand what they're asking FACTS to do for them. There was one area where that wasn't terribly apparent, which was the open enrollment dose escalation scheme.

This is where we're trying to encourage people to run these phase one dose escalation trials more efficiently. A great idea is: you don't have to do it in cohorts. Cohorts add quite a lot of time to the trial. You recruit your cohort size, typically three subjects, you treat them with your selected next dose of the drug, and then you follow them up. And it's not until you've got their results that you can recruit another cohort.

Scott Berry: Yeah. So you're waiting for all three, say, to finish before you treat a fourth, and maybe escalating as you go along. Yep.
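The calendar-time cost of that cohort gating can be illustrated with a toy model (invented arrival and follow-up numbers, not a FACTS computation): if subjects keep arriving while each cohort is followed up, the gated trial takes noticeably longer than open enrollment for the same number of subjects.

```python
import numpy as np

rng = np.random.default_rng(7)

def trial_duration(n_total=30, cohort=3, mean_gap=7.0, follow_up=28.0,
                   open_enrollment=False):
    """Rough time (days) to fully observe n_total subjects.

    Cohort mode: dosing of the next cohort waits until the previous cohort
    has completed follow-up. Open mode: subjects are dosed as they arrive.
    Toy model: exponential inter-arrival gaps, fixed follow-up window.
    """
    t = 0.0       # current calendar time
    done = 0.0    # time the latest-finishing subject completes follow-up
    for i in range(n_total):
        t += rng.exponential(mean_gap)        # next subject becomes available
        if not open_enrollment and i % cohort == 0 and i > 0:
            t = max(t, done)                  # gate: wait for previous cohort
        done = max(done, t + follow_up)
    return done

coh = np.mean([trial_duration() for _ in range(2000)])
opn = np.mean([trial_duration(open_enrollment=True) for _ in range(2000)])
print(f"cohort-gated: {coh:.0f} days; open enrollment: {opn:.0f} days")
```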

Tom Parke: But then that raises the question: if a fourth subject comes along while these three haven't resulted yet, what's the safe place to put them? Providing the rules for users to choose what they were doing needed really clear thinking through. I think we've done that now, and the dose escalation options you've got to set are, I think, much clearer. We're very happy with it, but it did take quite a while.

The other major area where we did some rework was a really nice feature called Bayesian predictive probabilities, which is where, partway through a trial, you can predict the probability of success of that trial.

You look at the data you've got, and based on that and on the distribution of your estimate, you simulate forward a bunch of times and ask: how many of those times were we successful? It's a very nice quantity to base your decisions on: if the predicted probability that this trial is going to be successful is less than, say, 5 percent, then we're happy to give up at this point. The other way it can be used is: what is the predicted probability of success if I stop enrolling now and follow everybody up? That's also a nice way of getting your trial to the right size.
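A minimal sketch of that calculation, for a single-arm binary endpoint under a Beta(1,1) prior and an invented success criterion (using NumPy and SciPy; FACTS's actual computations are more general):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)

def predictive_prob_success(x, n, n_max, p0=0.2, win_thresh=0.95, n_draws=10000):
    """Bayesian predictive probability of success, single-arm binary endpoint.

    x successes observed in n subjects so far; the trial will run to n_max.
    Final success: posterior P(rate > p0) > win_thresh, under a Beta(1,1) prior.
    """
    a, b = 1 + x, 1 + n - x                     # posterior given current data
    p_draw = rng.beta(a, b, size=n_draws)       # plausible true response rates
    future_x = rng.binomial(n_max - n, p_draw)  # simulate the remaining subjects
    # Posterior tail probability at the final analysis, for each completion:
    final_win = beta.sf(p0, a + future_x, b + (n_max - n) - future_x) > win_thresh
    return final_win.mean()

print(f"PPoS: {predictive_prob_success(x=8, n=20, n_max=50):.2f}")
```

The "stop enrolling and follow everybody up" variant is the same calculation with `n_max` set to the number already enrolled rather than the planned maximum.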

Where it got tricky, and where we hadn't put enough effort in: one of FACTS's features is that you can simulate what we call a staged design, where you're simulating two stages. That might be phase two and phase three, or it might be a phase two with some kind of big decision at some point to drop arms or add arms, so the parts either side of that decision are a little bit different. One of the features of that is that you've got a lot of options for controlling the second stage. The size of the second stage can vary depending on how far the first stage went: you're allowed to have an overall budget and put more into the second stage if the first stage ended early. That affects your predictive probability. Also, there are a lot of options for how much data you include in the second stage. Do we not include data from the first stage? Do we include it all? Do we only include it on the arms that went through? Do we pool all the data from the first stage? Do we include incomplete subjects? If we transition in a seamless way from the first stage to the second, we'll have subjects who we've recruited but only partially followed up. So there are lots of different ways in which data makes its way into stage two, and of course, if you're going to predict the success of stage two, you've got to get all that right.

Scott Berry: Yep.
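Those data-inclusion choices amount to a filter applied to the first-stage dataset before the stage-two analysis. A hypothetical sketch (the flag names and data layout are invented for illustration, not FACTS option names):

```python
def stage_two_dataset(stage1, carried_arms, include="carried_arms",
                      include_incomplete=False):
    """Select which first-stage subjects feed the stage-two analysis.

    stage1: list of dicts like {"arm": "d2", "response": 0.7, "complete": True}.
    """
    if include == "none":
        rows = []                                          # fresh start
    elif include == "all":
        rows = list(stage1)                                # carry everything
    elif include == "carried_arms":
        rows = [r for r in stage1 if r["arm"] in carried_arms]
    elif include == "pooled":
        # pool every stage-one subject onto one synthetic arm
        rows = [{**r, "arm": "stage1-pooled"} for r in stage1]
    else:
        raise ValueError(include)
    if not include_incomplete:
        rows = [r for r in rows if r["complete"]]
    return rows

stage1 = [{"arm": "d1", "response": 0.2, "complete": True},
          {"arm": "d2", "response": 0.7, "complete": True},
          {"arm": "d2", "response": 0.5, "complete": False}]
print(len(stage_two_dataset(stage1, carried_arms={"d2"})))  # → 1
```

A predictive probability for the second stage has to condition on exactly the dataset this choice produces, which is why each option changes the answer.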

Tom Parke: And that needed a more thorough going-over than it had before. So we're very pleased with that.

Scott Berry: So part of this is that you're adding tools for the designer, strong tools, and they're simulating multiple designs. Going back to phase one, somebody could say, "I want a 3+3," or "I want a BOIN design," or others, and they want to explore the designs. Which one works better? Which one's going to save me time? Which one's going to get the right answer more often? So this is adding features.

One of the aspects of clinical trial simulation, within the usual cadence of trial design, is that I go off and simulate multiple designs, and now I have to show the team how they compare. FACTS is incredibly good at taking a design, simulating it, and telling you what happened. But now I have to show people how it compares to other designs, and you have added the ability to do that. This is a hard thing to do; it's not like there's a single way or a single metric. It might be time, it might be the probability of getting the right dose, of finding the MTD; it can be a huge range of things. You've also added the ability to look at the designs by tying into R, for people to look at in something called airship. Yep.

Tom Parke: So we've got a number of ways in which FACTS can interact with the outside world, to give the user flexibility. It can import simulated data and simulate the trial with that, but it's also very thorough in the outputs it produces. So you don't have to rely on the built-in graphs; you can re-visualize things yourself, and you can even reprocess simulation results to do something with the simulated data that FACTS doesn't otherwise allow you to do.

And you're absolutely right: one of the big things you want to be able to do, that FACTS isn't quite so good at yet, is comparing designs. So one of the hooks we've added is being able to call out to airship, which is an R package, where you can create trellis plots and the like, and examine different operating characteristics over different dimensions, different scenarios, different design types, and so on. This is just a first step. At the moment, if you've got very different designs, you still have to collect the results, stitch them together yourself, and go into airship. But over the next year or two we will be adding greater flexibility within FACTS for doing different designs at the same time, and a more powerful way of creating design variants.
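The stitching step Tom describes amounts to collecting per-design summary tables into one long table keyed by design, scenario, and metric, which is the shape faceted/trellis plots want. A hypothetical sketch in Python with pandas (the numbers and column names are invented for illustration; airship itself is an R package):

```python
import pandas as pd

# Invented per-design simulation summaries, one table per design.
gsd = pd.DataFrame({"scenario": ["null", "effective"],
                    "power": [0.024, 0.86], "mean_n": [412, 355]})
goldilocks = pd.DataFrame({"scenario": ["null", "effective"],
                           "power": [0.022, 0.84], "mean_n": [389, 310]})

frames = {"group sequential": gsd, "Goldilocks": goldilocks}
long = (pd.concat(frames, names=["design"])       # stack, keyed by design
          .reset_index(level="design")
          .melt(id_vars=["design", "scenario"],   # one row per design/scenario/metric
                var_name="metric", value_name="value"))

# Wide view for a quick side-by-side comparison:
print(long.pivot_table(index="scenario", columns=["design", "metric"],
                       values="value"))
```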

Scott Berry: That's great. So FACTS now allows the user to do a comparison of these designs. This is now FACTS 7.1, with these new features, and it has just been released.

Interestingly, you're in a tough spot, because on one hand you're developing this software, it's used extensively by Berry Consultants, and we're pretty good at FACTS. When we sit down to ask where we go next, I'm wanting more features, and I know, for example, we're talking about ordinal endpoints and other features like that. At the same time, FACTS is getting to be a big product. There's a learning curve; there's ease of use; there are aspects of trying to make this just easier to use, for Berry Consultants, for others, for early users of FACTS, and all of that. So as this is continually moving and we're continually developing FACTS, where it sits now at 7.1: what's next? Where are we going?

Tom Parke: So we've got a number of work streams going on in parallel, and some will make it easier while some are going to make it harder to use. As you mentioned, we've got the addition of an ordinal endpoint: the ability to analyze and simulate trials with an ordinal endpoint, with both Bayesian and frequentist analyses. That development has been going on for some while and should come out this year, 2025.

Another area where we are expecting to make things more complicated is that we're going to give a lot more variety and control in the quantities you use to make decisions in the trial. At the moment you can basically only make decisions on the quantities that come out of a statistical analysis: p-values, Bayesian posterior probabilities, Bayesian predictive probabilities.

We're going to add the ability to include operating quantities of the trial in the decision criteria. How many arms are still in the trial? How many subjects have I recruited? How many subjects are on control? How much time has passed? So you'll be able to create decision criteria which include those as part of the expression. You know: if we currently have three arms, this is what I require to see to stop; if there's only one arm in the trial, this is what I require to see to stop.

And then we will also make the expressions you can create for those decisions more powerful, with AND and OR nesting. Currently you just say, here are all the rules, and they've either all got to be ANDed together or all got to be ORed together. We're going to allow you to put brackets in and have all these things ANDed together, OR these things ANDed together, OR these things ANDed together. So again, that gives you a lot more flexibility, but it might also be a little bit more complicated for the end user. At the same time, we're going to try and work on making it simpler and easier to use.
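The bracketed AND/OR rules Tom describes behave like a small boolean expression tree evaluated over trial quantities. A hypothetical sketch (the quantity names and thresholds are invented, not FACTS syntax):

```python
# Invented snapshot of trial quantities a decision rule might see.
state = {"posterior_prob": 0.991, "arms_remaining": 3, "n_recruited": 180}

def ALL(*parts):   # a bracketed group, ANDed together
    return lambda s: all(p(s) for p in parts)

def ANY(*parts):   # groups ORed together
    return lambda s: any(p(s) for p in parts)

# "(posterior > 0.99 AND one arm left) OR (posterior > 0.995 AND several arms left)"
stop_for_success = ANY(
    ALL(lambda s: s["posterior_prob"] > 0.99, lambda s: s["arms_remaining"] == 1),
    ALL(lambda s: s["posterior_prob"] > 0.995, lambda s: s["arms_remaining"] > 1),
)

print(stop_for_success(state))  # → False: 3 arms remain and 0.991 < 0.995
```

Nesting `ALL` and `ANY` arbitrarily gives exactly the bracketed combinations described, where the flat "all ANDed or all ORed" scheme is just one level of this tree.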

Scott Berry: So you're trying to do multiple things. You're trying to add more features, for somebody like me who wants more features and more flexibility, but at the same time there's the ease of use within daily practice. Somebody's got to be able to simulate these designs and move the ball forward on the trial design.

The other thing that interestingly comes up is the ability to compare designs: for example, a promising zone design, a Goldilocks design, a group sequential design, and the ability to do these naturally. You alluded to this a little bit. Many years ago we made the decision that FACTS was not going to be an aircraft carrier, I think was the analogy we used, where there are 14 designs on deck and you pick up a particular one. That's somewhat restrictive; I use the expression "Dragalin design number seven, Vlad's favorite design," because each such design is somewhat restrictive, and almost always we want to make it more flexible. So FACTS allows you to go in and create custom designs that have never been used before by putting multiple features together, but then the user has to know how it all works together, and there's the ease of use of that. So we keep struggling with this. What about wizards? We've gone back and forth about wizards. So what is a wizard, and are we going to wizards?

Tom Parke: So yes, though what we'll call them is still to be decided.

Scott Berry: "Wizard" is an internal name that we're using for them. Yep.

Tom Parke: So, one of FACTS's strengths is that you can easily start with a very simple fixed design: no adaptation, no interims, just two or three arms compared against control, or just two or three subgroups, or whatever it is. You can simulate that, get the results, and then, within that same design, you can add features. You can add interims, you can add adaptation, you can change the stopping rules, you can change the analysis, and so on. And you can do it all nice and incrementally, and see what effect each one has.

But as you said, what that means is that FACTS is very much a toolbox approach. You've got all these different options, and what you don't have is the ability to say, "I just want a group sequential design with two interims and these stopping boundaries." FACTS can simulate that, but you have to put everything in by hand. So the proposal, the idea, is that we will add these wizards, or designers, where you say, "What I want is one of these kinds of trials," and you get a much simpler set of screens with a much smaller number of options you need to select. From that, FACTS will fill out the full FACTS design, and if you don't touch it and just press simulate, you get your group sequential or your promising zone design simulated. But equally, you could go in and then start to tweak it manually if that's what you wanted to do.
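A wizard of that kind is essentially a function from a few high-level choices to a fully populated design specification. A hypothetical sketch; none of these field names are actual FACTS parameters, and the defaults are invented:

```python
def group_sequential_wizard(n_max, n_interims, efficacy_z=2.0, futility_z=0.0):
    """Expand a handful of wizard choices into a full (invented) design spec."""
    spacing = n_max // (n_interims + 1)
    interims = [spacing * (i + 1) for i in range(n_interims)]
    return {
        "design_type": "group sequential",
        "arms": ["control", "treatment"],
        "sample_size": n_max,
        "interim_analyses": [
            {"at_n": n, "stop_if_z_above": efficacy_z, "stop_if_z_below": futility_z}
            for n in interims
        ],
        "final_analysis": {"at_n": n_max, "win_if_z_above": 1.96},
        # Defaults the user could later tweak by hand on the full screens:
        "allocation_ratio": [1, 1],
        "dropout_rate": 0.0,
    }

spec = group_sequential_wizard(n_max=300, n_interims=2)
print(spec["interim_analyses"])  # two interims, at n=100 and n=200
```

Because the wizard only writes the spec, nothing stops the user from editing any field of `spec` afterwards, which matches the "press simulate, or tweak it manually" workflow described.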

Scott Berry: So in some sense, with this big toolbox, we can be an aircraft carrier at the same time. That's the hope of this.

Tom Parke: Yes, you get pre-assembled, bigger chunks. And I think the other advantage of this is that it will make it very easy for people to simulate the kinds of trials they're used to, and used to thinking in terms of. FACTS won't only be this slightly adventurous tool for doing things off the beaten track; it will also be able to do beaten-track things, and you can see whether you can do better than that.

Scott Berry: And this ties in a recommendation. In FACTS you can build designs that have never been used, custom, strange things, and the goal is that when you press simulate, you can see, "Ooh, this is a bad design," or "This is a good design." Which ties back to the start of this: a nice board game, I don't know if you've played it, is Steampunk, where your ability to build machines that are all custom, and have to do lots of things, determines whether you win a race. So it all ties into board games. Okay.

So this has been great, Tom. Unfortunately, we have reached our time here. This has been fabulous. We're going to come back as new versions of FACTS come out, with updates; we hope to be talking about wizards and ordinal endpoints. In the meantime, Steampunk's a really nice game; you can try that.

Tom Parke: I will have to stop playing
Pandemic and start playing Steampunk.

Scott Berry: So, tying it in: in the interim, you can go play some Steampunk. We appreciate everybody listening. Tom, thank you very much for joining. This has been In the Interim. Thanks.

Tom Parke: Thank you, Scott.

Thanks, everybody.
