This is lesson 4.3.2, Monte Carlo Simulation.

In the previous lesson,

we discussed the idea of sensitivity analysis,

and incorporating the impact of uncertainty into a technology assessment model.

Monte Carlo simulation is a methodology to systematically assess the impact of uncertainty.

The parameters of interest in an analysis will always have a distribution, as we've previously discussed.

For example, the average or mean benefit might be 30 symptom-free days, and that may be drawn from a normal distribution.

So there is also a variance, which might be 16, corresponding to a standard deviation of 4 days.

Monte Carlo methodology requires repeated random draws from the parameter distribution.

Because of this, it's important to use a random number generator when performing a Monte Carlo analysis.

In other words, you may think that you can pick observations from the distribution yourself, but hand-picked values are rarely truly random, so it's important to use a random number generator to ensure that the draws really are random.

Next let's consider how to actually perform the Monte Carlo analysis,

and it will become clear where the random draws come into play.

We know that the mean is equal to 30 days, and the variance is equal to 16.

From this, we can use a computer program to make repeated random draws from a normal distribution with these parameters.

So, we randomly draw a parameter value from the known distribution.

In other words, we draw a value of the number of symptom

free days from a normal distribution with a mean of 30 and a variance of 16.
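The draw just described can be sketched in a few lines of Python. Note that Python's `random.gauss` takes the standard deviation rather than the variance, so a variance of 16 means passing sqrt(16) = 4:

```python
import random

# Draw one value of symptom-free days from the known distribution:
# normal with mean 30 and variance 16 (standard deviation sqrt(16) = 4).
def draw_symptom_free_days(rng, mean=30.0, sd=4.0):
    return rng.gauss(mean, sd)

rng = random.Random(42)  # seeded random number generator for reproducibility
print(round(draw_symptom_free_days(rng), 1))
```

Seeding the generator is optional but makes the sequence of draws reproducible, which is useful when checking a simulation.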

Then we perform the analysis or

the technology assessment using this randomly selected parameter.

In this example, let's assume that on the first draw we got 28 days.

We perform the analysis and save the output.

For example, we may calculate the ICER using the randomly selected parameter value of 28 days, or whatever value happens to be drawn.

We save that output,

then we repeat the previous two steps.

So then we draw again. This time we might get 35 days.

We perform the analysis,

save the output, and we repeat that process again.

And repeat that process, maybe a hundred, a thousand, or a million times.

The idea is to repeat it enough times that we can create a new distribution of outputs.

When we combine the output of each calculation

into a larger set of a thousand or a million outputs,

we have a distribution that has its own average and variance.

So in this case, the distribution of ICERs can be used to determine the mean and variance of the cost-effectiveness result of the technology assessment.
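The draw-analyze-save loop described above can be sketched as follows. Only the normal distribution of benefits (mean 30 days, variance 16) comes from the lesson; the $2,000 incremental cost of the new treatment is a hypothetical figure chosen just so an ICER can be computed:

```python
import random
import statistics

# Minimal Monte Carlo loop. The benefit distribution (mean 30, variance 16)
# is from the lesson; the $2,000 incremental cost is a hypothetical figure.
def run_simulation(n_draws, incremental_cost=2000.0, seed=0):
    rng = random.Random(seed)
    icers = []
    for _ in range(n_draws):
        # Step 1: randomly draw the parameter (symptom-free days gained)
        # from its known distribution: mean 30, standard deviation 4.
        days = rng.gauss(30.0, 4.0)
        # Step 2: perform the assessment with that draw and save the output.
        # Here, the ICER is the incremental cost per symptom-free day gained.
        icers.append(incremental_cost / days)
    return icers

# Repeating steps 1-2 a thousand times yields a distribution of ICERs
# with its own mean and variance.
icers = run_simulation(1000)
print(round(statistics.mean(icers), 2), round(statistics.stdev(icers), 2))
```

The spread of the resulting ICER distribution is exactly the "how much does the output vary" question discussed next.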

From here, we can then determine how much the output varies.

We may have a distribution with a very tight shape, so even though there is some variance, most of the observations fall close to the mean.

Or we could have a very wide distribution, where many of the observations fall far away from the mean.

This will help us in interpreting and

using the technology assessment results to make decisions.

Let's consider another example.

A new cancer-screening blood test for certain types of cancer has been developed.

With this test, if the level of the protein found in the blood is higher than the 99th-percentile level of a healthy population, more expensive secondary testing is necessary.

The 99th-percentile protein level in the healthy population is 0.7.

That means that most of the time, if someone has cancer, it will be detected by the blood test: the measured protein level will be above 0.7.

There are two brands of tests available currently,

and we're trying to assess the difference.

One test may be significantly cheaper than the other, but has more variation in its results.

In other words, that test is wrong more often.

They have the same average but different variances.

So, on average the tests are both correct, but they differ in how much they can be wrong.

If a test result is positive,

the additional cost of testing may be $1500.

So it's important to ensure that the test is giving us accurate results, to prevent unnecessary additional testing and its cost.

We can use Monte Carlo analysis to

incorporate this uncertainty into the technology assessment.

So we assume that the true protein level is 0.7, and we randomly draw a test result from the known distribution for each test.

We calculate the expected value of treatment using that randomly drawn parameter,

and repeat the previous two steps 1000 times.

From these 1000 results,

we can calculate the average cost implications of each test.

In other words, we can determine

how much the variance in the cheaper test impacts the total cost to the patient,

or society, or government,

or whatever perspective we're performing this analysis from.
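The two-test comparison can be sketched along the same lines. The 0.7 threshold and the $1,500 secondary-testing cost come from the lesson; everything else here is a hypothetical assumption: a healthy patient whose true protein level is 0.65 (placed just under the threshold so that it is the tests' variances, not the threshold itself, that drive false positives), a precise test with standard deviation 0.02, and a cheaper, noisier test with standard deviation 0.10:

```python
import random

# Values from the lesson:
THRESHOLD = 0.7        # 99th-percentile protein level in a healthy population
SECONDARY_COST = 1500.0  # cost of secondary testing after a positive result

# Hypothetical values for illustration: a healthy patient's true level (0.65)
# and each test's measurement noise (standard deviation).
def average_secondary_cost(sd, true_level=0.65, n_draws=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        reading = rng.gauss(true_level, sd)  # one simulated test result
        if reading > THRESHOLD:              # positive -> secondary testing
            total += SECONDARY_COST
    return total / n_draws                   # average cost per patient

# The noisier (cheaper) test triggers far more false positives, and so a
# higher average downstream cost, even though both tests are right on average.
print(average_secondary_cost(0.02), average_secondary_cost(0.10))
```

Comparing the two averages is the Monte Carlo answer to how much the cheaper test's variance costs, from whatever perspective the analysis takes.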