A conceptual and interpretive public health approach to some of the most commonly used methods from basic statistics.

From the course by Johns Hopkins University

Statistical Reasoning for Public Health 1: Estimation, Inference, & Interpretation

207 ratings

From the lesson

Module 2C: Summarization and Measurement

This module consists of a single lecture set on time-to-event outcomes. Time-to-event data come primarily from prospective cohort studies with subjects who have not had the outcome of interest at their time of enrollment. These subjects are followed for a pre-established period of time until they either have the outcome, drop out during the active study period, or make it to the end of the study without having the outcome. The challenge with these data is that the time to the outcome is fully observed on some subjects, but not on those who do not have the outcome during their tenure in the study. Please see the posted learning objectives for each lecture set in this module for more details.

- John McGready, PhD, MS, Associate Scientist, Biostatistics

Bloomberg School of Public Health

The summary measures we've developed thus far for handling time-to-event data,

incidence rates and incidence rate ratios,

are useful ways to quantify the outcome of interest.

However, there's a certain richness to time-to-event data, because of the dual dimensionality of both the binary component and the time component, that can't be fully encapsulated with a single summary statistic.

The Kaplan–Meier method, something developed back in the 1950s, is still the current industry standard for creating two-dimensional graphical summary statistics of the time-to-event experience in single groups or multiple groups.

And there's an interesting connection with Hopkins to this method.

One of the authors of the method, Paul Meier, was a faculty member in my department in the 1950s.

He went on to other prestigious academic institutions and actually lived a full and productive research and personal life up through 2011, when he unfortunately died at age 89.

So, welcome back.

We're going to continue our discussion of time-to-event data and ways to summarize it.

But now we're going to focus on a graphical technique,

a very commonly used technique called the Kaplan–Meier method.

So upon completion of this lecture section, hopefully, you will be able to: explain the purpose of a survival curve and its basic properties; interpret Kaplan–Meier estimates of survival curves with respect to summarizing time-to-event data for samples of data; explain how censored observations are used in the Kaplan–Meier estimation process; and estimate a Kaplan–Meier curve by hand for small samples of data.

And just to get the sort of flavor of how this works,

I wouldn't expect you to do this by hand in real life,

especially with larger samples of data.

But for once or twice we'll have you do a small sample by

hand just to sort of appreciate the components of this.

And then use the Kaplan–Meier curve to give

rough visual-based estimates of time percentiles.

And then we'll look at a complementary presentation of the same thing,

just in a slightly different scaling.

So the idea of what we're driving towards here is this: incidence rates, the things we've looked at in the previous sections, are appropriate numerical summary measures for time-to-event data, and they incorporate the two dimensions of the data, time and the occurrence or non-occurrence of the event, into a single one-dimensional statistic.

However, time-to-event data is two-dimensional, so it would be nice to have a summary measure that breaks those dimensions out.

To visually capture the richness in such data,

a graphic would have to display both time,

and something about the occurrence of the event over time.

A common visual display for these types of data is what's called the survival curve.

And this can be estimated from sample data

using the aforementioned Kaplan–Meier approach.

Let's just talk about the survival curve,

the concept of a survival curve in general.

A survival curve is a summary statistic or

summary measure that estimates what we call the cumulative survival.

It's the proportion of our original population,

which we're going to estimate through a sample, remaining event free.

Surviving if you will,

not having had the event at least to a given time or beyond.

So on this curve, at any given time, if we trace up to the curve and over to the vertical axis, it will tell us what proportion of the original population is still alive beyond that time, or still has not had the event.

By definition, this curve at time zero has to start at 100%.

We're not interested in following people who've already had the event, because at this point we're looking at nonrecurring events, which can only occur once.

By definition, our entire population starts event free and the curve starts at one.

And it will stay at one until we see an event in our data,

and then it can drop and will continue to drop throughout the follow-up period.

The curve can only remain at

the same value as a function of time or decrease as time progresses.

The lowest this curve could go, and it doesn't have to go there, is zero, where everybody in the population has had the event in the follow-up period or been censored prior to that time.
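In symbols (using T for the time to event, a standard notation not introduced explicitly in the lecture), the properties just described are:

```latex
S(t) = \Pr(T > t), \qquad S(0) = 1, \qquad
S(t_1) \ge S(t_2) \ \text{whenever}\ t_1 < t_2, \qquad S(t) \ge 0.
```

That is, the curve starts at one, can only stay flat or decrease over time, and is bounded below by zero.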

Of course, we again,

are not privy to population data.

Everything we do in statistics is using sub-sample information,

and then we'll ultimately extrapolate back to the population shortly.

But right now we're going to focus on estimating

the experience in the population using a sample of data.

So the estimated curve, which we'll call "S hat of t", the hat just indicating again that we have an estimate,

is based on all data from all subjects in the sample

both those who have the outcomes of interest where we have full information,

and those who are censored.

We'll demonstrate the estimation procedure shortly, but first I just want to give some examples of the end result, these Kaplan–Meier curves, and how to interpret them.

Let's go back to our seminal example, the primary biliary cirrhosis trial at the Mayo Clinic.

You may recall the overall incidence rate in the follow-up period for these 312 people who were enrolled in the trial: there was a total of 125 deaths over 1,715 person-years of follow-up, across those who died and those who were censored in the study. Our numerical summary measure, which may be a little hard to interpret at face value, is 0.073 deaths per person-year of follow-up time.
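As a quick check of that arithmetic (a sketch using only the figures quoted above):

```python
# Incidence rate from the trial figures quoted above:
# 125 deaths over 1,715 person-years of follow-up.
deaths = 125
person_years = 1715
rate = deaths / person_years
print(round(rate, 3))  # 0.073 deaths per person-year
```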

And that is great in the sense that it gives

a summary measure of the risk that we can then compare to

other groups that we saw before we broke it out by

those who got the treatment and those who were on the placebo.

But it doesn't capture all the information about

what was going on with regards to death in the follow-up period.

Here is a summary measure that tries to actually visually get at those dimensions.

This is the Kaplan–Meier curve estimate for these data.

So this curve shows the cumulative proportion, the estimated proportion,

of the original sample of the 312 subjects in this study who survived,

did not have the event.

Here the event was death,

did not die by the corresponding follow-up time on the horizontal axis.

So you can see this curve starts at 100%,

everybody was alive at time zero.

This is on this study time scale.

So everybody was alive upon their enrollment in the study at their time zero.

And then notice this curve decreases pretty consistently across the entire follow-up period.

When we get to the end of the follow-up period,

we have the people who remained the longest in the study,

you can see that the curve still is well above zero.

Maybe on the order of 30%, indicating that at the end of the study we estimate roughly 30% of the original sample was still alive.

And you could see some others,

so if we wanted to look at what was the five year survival?

What proportion of people made it beyond five years after study enrollment?

We could go look at five here, trace it up.

These are all crude estimates because I'm interpolating visually.

So this isn't quite right but we estimate from

this graphic at least that it was roughly 75%.

So 75% of these patients with primary biliary cirrhosis survived beyond five years after the start of the clinical study.

Let's look at another example,

this is the Nepali children data and

the infant mortality rate in the six months following birth.

So we'd summarized this with a rate, in the six months post-birth, of 644 deaths per 1.6 million plus days of follow-up time. We came up with that number of 0.0004 deaths per follow-up day, and we said before we could rescale this to per year, per 500 person-years of follow-up, etc.
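The rescaling just mentioned is simple multiplication; here is a sketch using the approximate figures quoted above (the denominator, "1.6 million plus" days, is rounded here to exactly 1.6 million, so these numbers are approximate):

```python
# Infant mortality rate from the quoted figures, then rescaled.
deaths = 644
follow_up_days = 1.6e6                       # "1.6 million plus" days, rounded
rate_per_day = deaths / follow_up_days       # ~0.0004 deaths per follow-up day
rate_per_person_year = rate_per_day * 365    # ~0.15 deaths per person-year
rate_per_500_person_years = rate_per_person_year * 500   # ~73 per 500 person-years
print(rate_per_day, rate_per_person_year, rate_per_500_person_years)
```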

But now let's look at a visual of what was going on.

Remember, these data comprised a large sample: 10,295 live births.

So here's the Kaplan–Meier survival curve of time to death as a function of follow-up time since enrollment in the study, which is time since birth for these children.

So you can see this curve starts at 100%.

But the subtleties of this are kind of hard to pick up because of the scale here: logically, the proportion still alive goes from zero to 100%, but you can see that most of the action of the curve is relegated to the higher part.

So why don't we zoom in?

What I've done here is re-presented the graph, where I restricted the y-axis, the vertical axis, to between 90% and 100%.

So we can see the richness in this curve a little bit better.

So you can see here, this is very interesting and consistent with what we know about infant mortality risk in other countries: there is a steep drop-off in the first maybe 30 to 50 days, and there continues to be a decline in the proportion surviving, but the trajectory is flatter. That's very consistent with what we know about infant mortality in other countries: the highest risk of mortality after birth is in the first month or so.

Let's see what we can see here.

For example, we could estimate that the proportion of children still alive beyond 100 days, a little over three months after birth, is on the order of 94% perhaps. And then when we get down to the end of the follow-up period, 180 days, that drops to, perhaps, sorry for the sloppy writing, S(180) on the order of 93%.

This is again not perfect to scale, but that's a little bit harrowing, isn't it?

If within six months we estimate that we've lost 7% of this infant population.

Even though the majority of it happens early on,

there's still some action towards the end.

So this really puts in context that the rate we saw before, which was hard to interpret numerically, is high.

How do we estimate this Kaplan-Meier curve?

Well, it's generally done, in fact I'll say always done, using the computer, especially with samples of 10,000 observations, though I may make you do it by hand once.

I will though in a minute demonstrate the estimation process

with a small sample example just to give you some flavor for how this works.

And what the method does is, for those who actually have the event in the follow-up period, it uses the complete data on the time at which the event occurred and the fact that the event occurred.

But it also uses the incomplete data for censored observations, because this gives information about who is at risk to have the event at a given time in the follow-up period.

So we actually use the information about censored folks until they're censored.

We don't know whether they go on to have the event or not.

But they do help us understand who is at risk wherever we are in the follow-up period.

Here's an interesting trivia note about the Kaplan–Meier method.

Kaplan and Meier were actually not collaborators and

were not working together on this problem.

They actually serendipitously both submitted manuscripts to the same journal at

the same time dealing with how to

handle the summarization of time-to-event data with censoring,

and the editor of the journal suggested they actually meet up and write a paper together.

Just some basics: I'll describe the curve, and then we'll show some numerical examples to sort of solidify it.

So for any sample of time-to-event data, the curve starts at time zero, and the estimated proportion that survives, or remains event free, beyond time zero is one, as we discussed before.

And this curve is going to stay at one or 100% until we see our first event.

So if we don't see any events until four years after the start of the study, the estimated survival at one, two, and three years will still be 100%.

At our first event time and subsequent event times, the way we're going to estimate the survival curve, the cumulative survival, the proportion of the original sample that is still event free beyond the time we're looking at, t, is a two-part estimation process.

We'll introduce some notation here but then we'll fill it in with numbers to bolster it.

We're going to look at n(t), the number at risk of having the event at the time we're looking at. So if t is one, that's the number at risk at time one; if t is five, it's the number at risk at time five. And e(t) is the number of people who have the event at that time. Look at this ratio here; I've marked it up with writing, but we'll zoom in in a minute. It compares n(t) minus e(t), the number at risk minus the number of events.

So that's the total number of people who don't have the event divided by the number who are at risk. That's the proportion of people who could have had the event but didn't: the proportion of people who were eligible to have the event at time t but didn't, who survived beyond it.

And we take that and multiply it for reasons we'll get into in a minute,

by the proportion of the original sample that

makes it through or survives the previous event time.

Let's just talk about these bits again.

This piece over here is the cumulative proportion of the original sample making it beyond the previous event time, the event time before the one we're looking at, which I'll generically call t. This notation will clear up when we put some numbers into it.

This other piece is, among those who are still around in our sample, who haven't dropped out or had the event prior to this time, the proportion who survive beyond that time.

So to get the cumulative proportion who survive beyond the time we're looking at, we take the proportion of those still around who make it beyond that time, and multiply it by the proportion of the original sample that survived the previous event time, because that's the proportion of the original sample that would be eligible to have the event at this next time.
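The two-part update just described, multiplying the proportion surviving each event time by the cumulative survival through the previous event time, can be sketched in a few lines of Python (a minimal illustration of the product-limit idea, not code from the course):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival curve.

    times  -- follow-up time for each subject
    events -- 1 if the subject had the event at that time, 0 if censored
    Returns a list of (event_time, cumulative_survival) pairs.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)          # everyone is at risk at time zero
    surv = 1.0                     # S(0) = 1 by definition
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # events and total exits (events + censorings) at this time
        d = sum(e for tt, e in data if tt == t)
        exits = sum(1 for tt, _ in data if tt == t)
        if d > 0:                  # the curve only drops at event times
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= exits         # censored subjects leave the risk set too
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve
```

For instance, `kaplan_meier([2, 3, 6, 6, 7, 10], [1, 0, 1, 1, 0, 1])` steps the curve down at times 2, 6, and 10, and uses the censored subjects (times 3 and 7) only to track who is still at risk.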

Let me put in some numbers to make this a little more clear.

So let's look at a situation here where we have some data.

I've done all the work for you.

No, not all the work, but I've taken this data and I've ordered

it from lowest time to highest time.

Let's just describe these data.

Let's just arbitrarily, we're going to be generic here and say this scale is

in months and we're going to refer to a generic event.

It could be death, could be quitting smoking,

it could be completing a generalized equivalency diploma etc.

It could be graduating with a PhD,

anything where there's potential censoring.

We're just going to call the event "the event."

I've ordered this data from

smallest to largest times but let's consider some of these times.

So times like this one that are set by themselves,

this two simply means that

this subject was in our study for two months and then had the event.

So they're a complete piece of data.

This next subject was in our study for

three months and then he or she was censored, they dropped out.

So all we know about them is that if they had the event,

it had to be more than three months after we started the study.

What we're going to do is, remember, we start the curve at S(0) = 1, and up until two months our curve continues to stay at one: S(1 month) = 1, etc.

We're not going to see any action until we get to this first event.

Then let's track what we could have going on at this time.

At time two, the number at risk of having the event is our entire sample.

Everybody right before we get to time two is still around.

So you can think of this as the number who are around right before the event we see occurs, and there are 12 people in this sample. So this is 12, and then at time two we observe one event.

One person in 12 has the event, which means 11/12ths of the sample, the persons who could have had the event, did not: 92% of those who could have had the event did not.

And what proportion of the original sample was around right before time two?

Well, nobody had died or been censored prior to that,

so 100% of the sample was around.

92% of the 100% of the original sample who could have had the event did not, and survived. So 92% of our sample survives beyond this time.

What are we going to do now?

Well, the next place we see some action is at time six. That's when we see our next event.

So from an at-risk tracking perspective, we lost this person at time two because he or she had the event.

This person at time three did not have the event but they dropped out.

So if you think about coming in along a timeline here, right before we get to time six, these two people are no longer at risk.

So at time six, only 10 of the original 12 are still at risk.

And we actually see two events, which means eight of the 10 who could have had the event at time six did not: 80%. But this doesn't describe the proportion of the original sample who made it beyond time six; it only describes the proportion of those who were still event free right before time six, those at risk. So we need to merge that with our understanding of the proportion of the original sample that was eligible to have the event at time six. Think about this: we had estimated that the proportion of the original sample that made it beyond the previous event time, time two, was 92%. So our current state of knowledge is that 92% of the original sample was eligible to have the event at time six, at which point 80% of those who were eligible made it beyond time six. So 80% of the 92% of the original sample who were still eligible to have the event made it beyond time six, for a cumulative proportion of 74%.

Let's move on to the next event time which is 10 months.

And you can see we've lost another person to censoring at seven months,

so by the time we get to right before 10 months,

there are seven people left at risk of having the event.

And of those seven, only one has the event at 10 months.

So the proportion of people who could have had

the event at 10 months and did so is one out of seven.

And complementarily, the proportion of persons who could have had the event at time 10 but did not was six out of seven.

That's the proportion of those who were still around and eligible at time 10.

And if we want to turn that into the estimated proportion of the original cohort, or sample, that we started with, we take that time-specific survival and multiply it by the cumulative survival of persons who had made it through the previous event time, time six.

And so 74% of the original sample made it beyond or survived beyond time six.

And six out of seven of those who could have had the event at time 10 did not.

86% of those who could have had the event did not.

And we estimated that 74% of the original sample, those who survived beyond time six, were around and eligible to have the event at time 10.

Putting this all together, we'd say the cumulative proportion of the original sample we estimate to have survived, remained event free, beyond time 10 is 86% of that 74%, or cumulatively 64%.

So you could keep going through the final event time, and you'd see that the largest time in this sample, 32 months, is in fact an event time. So the curve actually drops to zero percent at the end of the follow-up period here; if this last time were a censoring time, the curve would remain at some value above zero.
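The hand calculation above can be replayed from the risk-set table alone (note the spoken figures round at each step, e.g. 92% and 80%, so the exact products differ slightly in the last digit):

```python
# (time, number at risk, events) at each observed event time,
# as tallied in the worked example above.
event_table = [
    (2, 12, 1),    # 12 at risk, 1 event  -> 11/12 survive this time
    (6, 10, 2),    # 10 at risk, 2 events -> 8/10 survive this time
    (10, 7, 1),    # 7 at risk, 1 event   -> 6/7 survive this time
]
surv = 1.0                     # S(0) = 1
for t, n, e in event_table:
    surv *= (n - e) / n        # multiply by the time-specific survival
    print(f"S({t}) = {surv:.3f}")
# S(2) = 0.917, S(6) = 0.733, S(10) = 0.629
```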
