1:11

As a systematic reviewer or data analyst, the most difficult decision you will make is which studies to combine in the meta-analysis.

The justification for combining results is that the studies are estimating, in whole or in part, a common effect, and that they address the same fundamental biological, clinical, or mechanistic question.

So here's one example.

If we're asking what the effect of interferon therapy on Hepatitis C is, you will rarely have two identical studies.

The studies you're going to include in your systematic review usually differ

in one way or another.

And the size of the effect of interferon therapy might be higher or lower when the participants are older, more educated, or healthier than others.

Think about an example where one trial is conducted in Asia, say in China, where Hepatitis C is more prevalent, and another study is conducted in North America. Do you want to combine those two studies in one meta-analysis, given that the underlying or baseline rates of Hepatitis C are different?

And there are different forms of interferon and different doses and

there are different subtypes of Hepatitis C.

You have to judge whether the differences between the studies are small enough that you feel comfortable combining them, or so big that you don't feel comfortable throwing them into the same analysis.

When you do a meta-analysis, a typical figure, which we call a forest plot, shows the results of the meta-analysis. You probably have seen this in previous lectures or in other courses here.

And in this particular analysis we have 5 studies: Kunif 1997, and so on. Each study is represented by its point estimate and its 95% confidence interval.

So the square in the middle for each study is the point estimate.

And you will notice the size of the square differs between studies: the square for a larger study is larger because that study takes more weight in the meta-analysis.

The line of no effect, or the null value, is in the center; since we are using the risk ratio as the measure of association, a risk ratio of one is the null value.

And then you can label which side of the graph favors which intervention at the bottom of the figure.

So here the new thing you will notice is the little yellow diamond at the bottom of the graph, which shows the meta-analytical result.

The center of the diamond lies where the meta-analytical result is, and its width is proportional to the 95% confidence interval.

So here, since all 5 studies show more or less similar effects, the pooled estimate lies around the null value.

4:09

So, in the forest plot, or in the meta-analysis, each study is summarized by an estimate of effect, for example the risk ratio.

And the overall measure of effect is a weighted average of the results

of the individual studies.

So the overall measure of effect is the yellow diamond

you saw from the previous slide.

And the weighted average reflects the varying contribution of each trial. A trial deserves more weight if it has more information, and more information mainly refers to the sample size. More information leads to increased precision.

If you still remember, in the previous figure we have squares representing each individual study, and the larger square means that particular study takes more weight in the meta-analysis.

More formally, you can write the inverse-variance weighted average using this formula.

So let's say we have estimates from individual studies, and

now we're just combining them as a weighted average.

Here, Y_i refers to the intervention effect, the measure of association or effect estimated in study i, and W_i is the weight given to the i-th study.

And as we can see from the formula, the point estimate, or where the center of that diamond lies, is just the weighted average: the sum of each estimate times its weight in the numerator, divided by the sum of the weights.

In addition to getting the point estimate from the meta-analysis,

we also want to know the variance.

We use the variance to construct the 95% confidence interval, and the variance of the weighted average equals one over the sum of the weights.
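Putting those pieces together (using the lecture's Y_i for the study estimates and W_i for the weights), the fixed-effect formulas are:

```latex
\hat{Y} = \frac{\sum_i W_i Y_i}{\sum_i W_i},
\qquad
W_i = \frac{1}{\operatorname{Var}(Y_i)},
\qquad
\operatorname{Var}(\hat{Y}) = \frac{1}{\sum_i W_i}
```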

And we will show how you can use this formula to get the meta-analytical results in the next slide.

Here's one example.

Let's start with the first study. If you look at the table in the upper left corner, here we show the results from the first study.

Of those treated, 12 experienced the events and

53 did not experience the event.

And of those in the comparison group, 16 experienced events and

49 did not have the event.

And we can use the odds ratio formula, the ratio of the cross products, to get the odds ratio from this particular study. This should not be new to you, because you have learned it in your biostatistics courses.

By plugging in the numbers, we have an odds ratio of 0.69.

Now we're going to get the Y_i of the formula I showed you previously by taking the log of that odds ratio: the log odds ratio equals -.36.

And the variance for that Y_i, again by plugging into the formula you have seen in other courses, especially the biostatistics courses, equals .18.

And now, the weight for this particular study, if you remember the formula from the previous slides: W_i equals 1 over the variance, which gives a weight of 5.4. You follow the same steps to get the log odds ratio, as well as the variance and weight, for each individual study you're going to include in your meta-analysis.
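As a check, the numbers for study 1 can be reproduced in a few lines of Python. This is a sketch using the counts 12, 53, 16, and 49 given above; the variance formula is the standard one for a log odds ratio (sum of reciprocals of the four cell counts):

```python
import math

# 2x2 table for study 1: treated vs. comparison, event vs. no event
a, b = 12, 53   # treated: events, non-events
c, d = 16, 49   # comparison: events, non-events

or1 = (a * d) / (b * c)          # cross-product ratio (odds ratio)
y1 = math.log(or1)               # log odds ratio: the Y_i in the formula
var1 = 1/a + 1/b + 1/c + 1/d     # variance of the log odds ratio
w1 = 1 / var1                    # inverse-variance weight: W_i

print(round(or1, 2), round(y1, 2), round(var1, 2), round(w1, 1))
# → 0.69 -0.37 0.19 5.4   (the lecture truncates the log OR to -.36)
```

Repeating this for each study gives the columns of the table discussed next.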

And now here are all the data you have.

Let's say for this particular meta-analysis you have 6 studies.

And I just showed you how to get the odds ratio, log odds ratio, variance, and weight for the very first study.

And if you remember the data from the previous two-by-two table, they're reflected here in this table as well.

So you follow the same steps, calculating the odds ratio, log odds ratio, variance, and weight for each individual study, and the results are shown in the table at the bottom.

If we focus on the variance column of the table on the bottom,

you will see the variance for the first study is .19.

The variance for the second study is .29.

And comparing these two studies, which one is more precise?

Well, we know the smaller the variance, the more precise the study.

Which means that study will take more weight in your meta-analysis.

So let's move on to the next column on the same table.

You will see that the weight for the first study is 5.4, and the weight for

the second study is 3.45.

And if you look across all 6 studies,

you will notice that study four takes the largest weight, which is 17.16.

So that's the most precise study among the 6, and that study will dominate your meta-analysis, which means it takes the largest weight relative to all the other studies.

If you sum up all the weights, that equals 42.25, and if you multiply the weight for each individual study by its log odds ratio and sum them up, you get the summation of W_i times Y_i, which equals -30.59.

And the reason I'm emphasizing those two numbers is that you can plug them into the formula I showed you previously to get the meta-analytical result.

10:17

And here is the formula again.

So remember, the meta-analysis is just a weighted average of individual study results.

And there are different ways to compute the weights.

In the example I just showed you, we're using the fixed effect meta-analysis.

So the weight equals the inverse of the variance of the effect estimate.

You may have heard of random-effects meta-analysis, or other methods of weighting studies.

And the difference between these analyses is just W_i: the weight given to the i-th study is slightly different.
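For reference, one common random-effects approach (the DerSimonian-Laird method, not covered in this lecture) keeps the same weighted-average form but adds an estimated between-study variance term to each study's variance:

```latex
W_i^{\text{RE}} = \frac{1}{\operatorname{Var}(Y_i) + \hat{\tau}^2}
```

When the between-study variance estimate is zero, this reduces to the fixed-effect weight.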

And for now, let's just focus on the fixed effect meta-analysis.

And by plugging in the numbers from the example I showed you, where the summation of Y_i times W_i equals -30.59 and the summation of W_i equals 42.25,

you get the log odds ratio for your meta-analytical result, which equals -.72.

If we exponentiate that number we get the odds ratio which equals .48.

Remember, we also talked about how to get the variance for the pooled odds ratio.

And the variance equals one over the summation of the weights, which equals .024.

Taking the square root of that number gives the standard error, and you can plug the standard error into the formula to get the 95% confidence interval for your odds ratio.
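Using just the two sums from the example, these pooling steps can be sketched in Python. The confidence-interval endpoints below follow directly from those sums; they are not quoted in the lecture:

```python
import math

sum_wy = -30.59   # sum of weight * log odds ratio over the 6 studies
sum_w = 42.25     # sum of the inverse-variance weights

pooled_log_or = sum_wy / sum_w        # fixed-effect pooled log odds ratio
pooled_or = math.exp(pooled_log_or)   # back-transform to the odds ratio scale

var = 1 / sum_w                       # variance of the pooled log odds ratio
se = math.sqrt(var)                   # standard error

# 95% confidence interval on the log scale, then exponentiate
lo = math.exp(pooled_log_or - 1.96 * se)
hi = math.exp(pooled_log_or + 1.96 * se)

print(round(pooled_or, 2), round(lo, 2), round(hi, 2))
# → 0.48 0.36 0.66
```

This reproduces the pooled odds ratio of .48 and the variance of .024 from the slides.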

So these formulas, you have probably used them in your other courses, and this is really a quick refresher of methods you already know.

After we do all these analyses (and hopefully you don't have to do them by hand; the software will do it for you), you'll get a nice forest plot.

So, here again we have the 6 studies.

Each study is represented by a square, and the 95% confidence interval.

Now we have the summary odds ratio, shown as the yellow diamond on the bottom, and

as I said before, study 4, the Lang study, takes the largest weight. The relative weight that study takes is 41%.

As you can see, that study shows the strongest effect compared to the other 5 studies, so that's why the diamond lies between that study's point estimate and all the others.

And this is reflected by the formula you just used, which is the weighted average of the point estimates from each individual study.

There are different types of software you can use to do the meta-analysis, and the results are presented slightly differently in published papers.

Here is one example from a published systematic review that compared conventional occlusion versus pharmacologic penalization for amblyopia.

And the outcome is mean difference in visual acuity.

Here we have 3 studies, each represented by a dot and a 95% confidence interval.

Well, we don't see the diamond down at the bottom.

And why is that?

Because the 3 studies are so heterogeneous, as shown by the statistical

measures of heterogeneity as well as by the authors' qualitative synthesis.

And the authors decided not to combine the 3 studies in a meta-analysis.

So a systematic review does not have to include a meta-analysis, but you can still use the forest plot to show the individual study results.

This is very important, because students ask me all the time: how many studies do I have to have in a systematic review to do a meta-analysis?

Well, the first answer is, you don't have to do a meta-analysis.

You only do a meta-analysis when the studies are comparable,

when the studies are homogeneous,

when you feel comfortable that studies are estimating more or less a similar effect.

14:00

The results of a meta-analysis, the estimate as well as the confidence interval, must, as in any other study, be interpreted in the context of a clinically important effect size.

A statistically significant result might not be clinically important, but a result that is not statistically significant may still be compatible with a clinically important effect.

So this is an important note that I would like to make.

Absence of evidence is not evidence of absence.

So be very careful when you're interpreting the results from

meta-analysis.

The p-value, or statistical significance, is not the key; you have to interpret the result in the context of the clinical question.

In this section, we have formally introduced what a meta-analysis is and, as I showed you, the formulas to get the meta-analytical results. A meta-analysis is simply a weighted average of the results from individual studies, typically represented in a forest plot.

And we will move on to talk about, why do a meta-analysis.