Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.


From the course by Johns Hopkins University

Mathematical Biostatistics Boot Camp 2

34 ratings


From the lesson

Two Binomials

In this module we'll cover some methods for comparing two binomials, including the odds ratio, the relative risk, and the risk difference. We'll mostly discuss confidence intervals, and we'll develop the delta method, the tool used to create these confidence intervals. After you've watched the videos and tried the homework, take a crack at the quiz!

- Brian Caffo, PhD, Professor, Biostatistics

Bloomberg School of Public Health

Hi. My name is Brian Caffo, and this is Mathematical Biostatistics Boot Camp 2, lecture 6 on the Delta method.

In this video we're going to recap a little about the odds ratio, the relative risk, and related measures. Then I'll describe how you get the standard errors and test statistics for these methods using a technique called the Delta method. And then I'll briefly derive the Delta method, in a general rather than highly technical way.

Recall the setting where X is Binomial(n1, p1) and Y is Binomial(n2, p2), and we can summarize the data in a 2 by 2 table. We defined the risk difference as the difference between the proportions, p1 - p2; because its estimate is just a difference of averages, we can derive its standard error very quickly. We have the relative risk, p1 / p2, with the associated estimator p1 hat over p2 hat, and a standard error for the log relative risk, though it isn't yet clear how that was derived. And we have the odds ratio: the odds for group one, p1 / (1 - p1), divided by the odds for group two, p2 / (1 - p2). The associated estimate is [p1 hat / (1 - p1 hat)] divided by [p2 hat / (1 - p2 hat)]. If you plug the cells of the 2 by 2 table into that estimate, you'll see that it's the cross-product ratio: the product of the main diagonal, n11 times n22, divided by the product of the off-diagonal, n12 times n21. We also gave the estimated standard error for the log odds ratio estimate, which is the square root of the sum of the reciprocals of the cell entries from the 2 by 2 table: sqrt(1/n11 + 1/n12 + 1/n21 + 1/n22). In each case, the confidence interval is the estimate plus or minus the normal quantile times the standard error of the estimate. Now a couple of notes are in order.
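The three measures and their standard errors recapped above can be computed directly from a 2 by 2 table. The sketch below uses made-up counts purely for illustration; the variable names are my own, not from the lecture.

```python
import math

# Hypothetical 2x2 table (made-up counts, for illustration only):
#             event    no event
# group 1:    n11=20   n12=80     (n1 = 100)
# group 2:    n21=10   n22=90     (n2 = 100)
n11, n12 = 20, 80
n21, n22 = 10, 90
n1, n2 = n11 + n12, n21 + n22

p1_hat = n11 / n1
p2_hat = n21 / n2

# Risk difference: a difference of sample proportions, so its
# standard error follows directly from the binomial variance.
rd = p1_hat - p2_hat
se_rd = math.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

# Relative risk, with the standard error of its log.
rr = p1_hat / p2_hat
se_log_rr = math.sqrt((1 - p1_hat) / (p1_hat * n1) + (1 - p2_hat) / (p2_hat * n2))

# Odds ratio: equals the cross-product ratio n11*n22 / (n12*n21).
or_hat = (p1_hat / (1 - p1_hat)) / (p2_hat / (1 - p2_hat))
se_log_or = math.sqrt(1 / n11 + 1 / n12 + 1 / n21 + 1 / n22)

print(rd, rr, or_hat)
```

With these counts the odds ratio works out to 20·90 / (80·10) = 2.25, matching the cross-product ratio as claimed.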
First, for the risk difference, we saw that we could perhaps improve on this confidence interval by adding to the cells of the table in a particular way.

In addition, for the other two estimates, the relative risk and the odds ratio, the quantity in the interval is the log relative risk or the log odds ratio. You add and subtract z at 1 minus alpha over 2 times the standard error of the log estimate, and then, if you want an interval for the relative risk or odds ratio itself, you have to exponentiate the endpoints of the interval. I hope you remember that as well. So this is our starting point, and during this lecture we're going to show you how to get these standard error estimates, or at least describe how they're obtained. Hopefully you'll get a pretty good sense of how it's done, and see how it could apply far more broadly.
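Building the interval on the log scale and then exponentiating the endpoints can be sketched as follows, again with made-up counts:

```python
import math

# Hypothetical 2x2 table counts (illustration only).
n11, n12, n21, n22 = 20, 80, 10, 90

log_or = math.log((n11 * n22) / (n12 * n21))
se_log_or = math.sqrt(1 / n11 + 1 / n12 + 1 / n21 + 1 / n22)
z = 1.96  # approximate 0.975 standard normal quantile, for a 95% interval

# Interval for the log odds ratio, then exponentiate the endpoints
# to get an interval for the odds ratio itself.
lo = math.exp(log_or - z * se_log_or)
hi = math.exp(log_or + z * se_log_or)
print(lo, hi)
```

Note that the resulting interval for the odds ratio is not symmetric about the point estimate, which is expected since the symmetry holds on the log scale.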

Okay, so the Delta method is a method we can use to obtain standard errors in instances where the estimate isn't just a simple average, or a simple difference of averages as in the risk difference case; for example, the log relative risk or the log odds ratio.

So here's formally what the Delta method states. Suppose you have an estimator theta hat of an estimand theta, and (theta hat minus theta) divided by the estimated standard error tends to a standard normal in distribution. Then, for a sufficiently smooth function f, [f(theta hat) minus f(theta)] divided by [f'(theta hat) times that same standard error] also converges to a standard normal.

In a sense, what this is saying is that the asymptotic mean of f(theta hat) is still f(theta). So if you want to estimate this function of theta, the obvious estimate is f(theta hat). And if we're going to use f(theta hat) as an estimate of f(theta), we'd like a standard error. The Delta method says: take the standard error you would use if you weren't applying the function, the standard error of theta hat, and simply multiply it by the derivative of f evaluated at theta hat.
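One way to see the Delta method at work is a small simulation, not from the lecture: for f(p) = log(p) applied to a binomial proportion, the rule gives SE(log p hat) ≈ |f'(p)| · SE(p hat) = (1/p) · sqrt(p(1-p)/n), which we can compare against the empirical spread of simulated log p hat values. The sample sizes and parameters below are arbitrary choices.

```python
import math
import random

# Simulation sketch (hypothetical setup): does the delta-method
# standard error for f(p_hat) = log(p_hat) match the empirical spread?
random.seed(0)
n, p = 500, 0.3
sims = []
for _ in range(10000):
    x = sum(random.random() < p for _ in range(n))  # one Binomial(n, p) draw
    sims.append(math.log(x / n))

mean = sum(sims) / len(sims)
empirical_sd = (sum((s - mean) ** 2 for s in sims) / len(sims)) ** 0.5

# Delta method: SE(log p_hat) ~ |f'(p)| * SE(p_hat) = (1/p) * sqrt(p(1-p)/n)
delta_se = (1 / p) * math.sqrt(p * (1 - p) / n)
print(empirical_sd, delta_se)
```

The two numbers should agree closely for a sample size this large, which is exactly the asymptotic claim in the theorem above.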
