0:03

Now we're going to talk about saddle-point asymptotics, which is the process of refining saddle-point bounds to get the more accurate estimates that are available to us.

Again, in the previous segment I talked about the basic idea: use the Cauchy coefficient formula, find the saddle point, take the circle whose radius passes through the saddle point, and bound the integrand everywhere on the circle by its value at the highest point. Now, to get better asymptotics, what we're going to do is focus on the place near the saddle point.

As we saw early on, the integrand is really low on most of the circle, so what we want to do is find a good bound on the tail, the part that's really low, and focus on the part of the path that's near the saddle point.

This is like Laplace's method, which we talked about very early on in our study of asymptotic analysis: you focus on the center, bound the tails, and then complete the tails back in. With that kind of calculation we can get very accurate asymptotic estimates. That is, we're going to look at the little part of the circle that is close to the saddle point, and the rest of it we're going to show to be negligible, with different estimates.

That's the saddle-point method, and it's easy to see why it's going to be effective. We saw that the bounds were off by a factor of about the square root of 2 pi n, and that factor comes precisely from the part of the circle where the integrand is low but we were bounding it as if it were high. That's all it is.

It's just a missing factor, so we're going to do a little more analysis to see that we don't need to give that factor up: we can get an accurate estimate with Laplace's method.

Now, as with many of the transfer theorems that we consider in analytic combinatorics, there are going to be technical conditions that need to be satisfied for us to apply this method. Just for completeness, in this lecture I'm going to call that susceptibility. That's just a name for the technical conditions that are going to be needed for saddle-point approximations to work. All it is, really, is a formal statement that this process is going to work.

That is: you can find the saddle point, the root of the saddle-point equation; the curve can be split into two parts such that the tail part is negligible; and on the part that's not negligible, near the saddle point, there's a quadratic approximation, just as we saw from an elementary analysis of what a saddle point is. The center doesn't have to be precisely at the saddle point; if it's near the saddle point, you still have that quadratic approximation. Those are the conditions, stated as generally as we can make them.
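To make the quadratic-approximation condition concrete, here's a small numerical sketch of my own (not from the lecture), using the function f(z) = z - (n+1) log z that shows up in the 1/n! example later: near its saddle point zeta = n+1, f(zeta + it) agrees with f(zeta) - f''(zeta) t^2/2 up to a cubic error.

```python
import cmath

n = 50
zeta = n + 1                               # saddle point: root of f'(z) = 1 - (n+1)/z
fpp = (n + 1) / zeta ** 2                  # f''(z) = (n+1)/z^2, evaluated at zeta

def f(z):
    # f(z) = z - (n+1) log z, so that e^f(z) = e^z / z^(n+1)
    return z - (n + 1) * cmath.log(z)

# Near the saddle point, f is quadratic to leading order:
# f(zeta + i t) = f(zeta) - f''(zeta) t^2 / 2 + O(t^3)
for t in (0.5, 1.0, 2.0):
    exact = f(zeta + 1j * t)
    quadratic = f(zeta) - 0.5 * fpp * t ** 2
    print(t, abs(exact - quadratic))       # small, shrinking like t^3 as t -> 0
```

The cubic error here is governed by f'''(zeta) = -2(n+1)/zeta^3, which is tiny once n is moderately large.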

3:34

As with many of the transfer theorems that we considered earlier on, for singularity analysis and other approaches, these technical conditions have to be checked. If there is a multiple saddle point, then there may be some other part of the condition to worry about, but I'm not going to worry about that in this lecture.

Also, the tails have to be negligible in the first place, and however the estimate works, the error from completing the tails has to be negligible as well. We'll look at a detailed example, but this kind of technical detail is more appropriate to read about in the book.

What this is going to lead to is the so-called saddle-point transfer theorem. All it says is: if you have a contour integral, write the integrand F(z) as e to the f(z), just to simplify the calculation a little bit; with no singularity, there's no problem doing that. If the integrand is susceptible to this sort of approximation, then the integral is approximated by e to the f(zeta) divided by the square root of 2 pi f''(zeta). It tells us exactly how to compute that missing factor. This is a general technique for contour integration, not just for asymptotics: it gives a way to approximate any contour integral. The proof is very similar to the proof that we did for the saddle-point bound.

So that's the theorem, but what we want to use it as is a transfer method for generating functions. Translating it over to our use of Cauchy's theorem, where we're looking for the coefficient of z to the n in a generating function G(z), what we're going to do is apply this theorem with F(z) = G(z)/z^(n+1). All that amounts to is plugging f(z) = log G(z) - (n+1) log z into the general theorem.

So our saddle-point equation is just modified slightly, and the main point is that we evaluate our function at the saddle point and then divide by the corrective factor, the square root of 2 pi times the second derivative f''(zeta). It's a plug-and-chug method for getting coefficients of z to the n out of entire functions, which have no singularities.

Now, there certainly is technical detail in proving susceptibility, and I'll try to illustrate some of that in an example. But in this lecture what I really want to focus on is the idea that this kind of approximation is available.

I can't claim that this is as clean as many of the other transfer theorems we've looked at, because checking those technical conditions can be more challenging. Anyway, the summary is that for a generating function G(z), and this is just doing the math backwards, we use the saddle-point equation G'(zeta)/G(zeta) = (n+1)/zeta, so zeta is the solution to that. Then the approximation follows, where little f is log G(z) minus (n+1) log z.

That's a relatively simple calculation for getting asymptotic estimates of coefficients. So let's go back to our examples.
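The whole recipe fits in a few lines of code. This is my own sketch, not from the lecture: the function name, the numerical derivatives, and the bracketing interval for the bisection are all my choices, and it assumes f' changes sign once on the interval. It solves the saddle-point equation and plugs into the formula, illustrated on G(z) = e^z, where the answer should be close to 1/n!.

```python
import math

def saddle_point_estimate(logG, n, lo=1e-9, hi=1e9):
    """Estimate [z^n] G(z) via the saddle-point approximation.

    Uses f(z) = log G(z) - (n+1) log z: solve f'(zeta) = 0 by bisection,
    then return e^f(zeta) / sqrt(2 pi f''(zeta)).  Derivatives are numerical.
    """
    h = 1e-4
    f = lambda z: logG(z) - (n + 1) * math.log(z)
    fp = lambda z: (f(z + h) - f(z - h)) / (2 * h)
    fpp = lambda z: (f(z + h) - 2 * f(z) + f(z - h)) / h ** 2

    a, b = lo, hi                        # assumes f' < 0 near lo and f' > 0 near hi
    for _ in range(200):                 # bisection for the root of f'
        mid = (a + b) / 2
        if fp(mid) < 0:
            a = mid
        else:
            b = mid
    zeta = (a + b) / 2
    return math.exp(f(zeta)) / math.sqrt(2 * math.pi * fpp(zeta))

# G(z) = e^z, so [z^n] G(z) = 1/n!
est = saddle_point_estimate(lambda z: z, 20)
print(est * math.factorial(20))          # close to 1; the approximation error is O(1/n)
```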

Estimating 1/n! is a saddle-point approximation: the coefficient of z to the n in e to the z. So G(z) = e^z, and taking logs, our function is f(z) = z - (n+1) log z, so the saddle-point equation is 1 - (n+1)/z = 0. That gives zeta = n+1, and the approximation is just plug in zeta; the second derivative f''(zeta) is 1/(n+1). So the approximation that we get is e^(n+1) over (n+1)^(n+1), divided by the square root of 2 pi over (n+1). And since (n/(n+1))^n goes to 1/e, simple asymptotics gives us exactly Stirling's approximation: 1/n! ~ e^n over n^n square root of 2 pi n.
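To see how good this is, here's a quick numerical check of my own (not from the lecture), comparing the approximation with the exact value on a log scale so that nothing overflows:

```python
import math

# Saddle-point approximation: 1/n! ~ e^(n+1) / ((n+1)^(n+1) * sqrt(2 pi / (n+1)))
for n in (10, 100, 1000):
    zeta = n + 1
    log_approx = zeta - zeta * math.log(zeta) - 0.5 * math.log(2 * math.pi / zeta)
    log_exact = -math.lgamma(n + 1)              # log(1/n!)
    print(n, math.exp(log_approx - log_exact))   # ratio tends to 1 as n grows
```

The ratio behaves like 1 + 1/(12(n+1)), which is the same relative error as the leading term of Stirling's formula.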

This extra factor in the saddle-point approximation gives us exactly the factor that we were missing before. So there's always a choice when using the saddle point: you can do all the detailed calculations to check that the integral is susceptible and so forth, or you always have the option of just using the bound and sacrificing that factor, which in many applications is the option people will choose.

Now, if it becomes important to get the factor, we can get it, but it takes some work. Again, to check susceptibility you have to check that the tails are negligible, that the central approximation is quadratic, and that we can complete the tails back in, in a reasonable way. That really means doing the calculation for the particular function that we have. At this level we don't necessarily have a general scheme, but this is an important starting point for limiting distributions, something advanced that we don't study in this course.

So let's just take a look. And again, this is the last lecture in a long course, so I'm not going to do these calculations in detail,

other than to just exhibit them. This is the kind of calculation that's needed to show that it's susceptible, for this problem. What we're doing is a contour integral of e^z/z^(n+1), or, putting it all in one function, e^(z - (n+1) log z), around the circle z = n e^(i theta).

The first thing is just to switch to polar coordinates, z = n e^(i theta), and plug that in; most of it comes out of the integral. A factor of e^n/n^n comes out, and all that's left is 1/(2 pi) times the integral from 0 to 2 pi of e^(n(e^(i theta) - 1 - i theta)) d theta. That's just simple substitution. Now what the saddle-point method requires is that we're able to split this into two contours.

One is for the little part close to the saddle point, and the other is for the tail, which is all the rest of the circle. Those are the two contours, and the point is that we have to be able to prove that the tail gets exponentially small. You can look at the text for that proof, but when we're away from the saddle point, remember, in the 3-D diagram, that's where the curve drops all the way down; so it's exponentially small.

Then we're going to just neglect the tails and work with the part close to the saddle point. Another thing to notice is that we can slightly shift the saddle point if we want; it simplifies the calculation quite a bit, as long as we still have that quadratic approximation near where we are.

13:04

And there's a small calculation to check that.

Then we can change variables and do the integral, and that's the integral that we have. We want to restrict theta one way to complete the tail, so that we get an integral that we can evaluate; that integral is a standard normal integral, and the square root of 2 pi over n it evaluates to gives the final result. So we had to bound the splitting exponent alpha one way to complete the tails, and bound it the other way to drop the O-term, and we're left with finding a value between minus 1/2 and minus 1/3.
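Why an exponent strictly between -1/2 and -1/3? Splitting the circle at theta_0 = n^alpha, we need n theta_0^2 to go to infinity (so the completed Gaussian tail is negligible) and n theta_0^3 to go to 0 (so the cubic term in the expansion is negligible). A quick sketch of my own, with alpha = -2/5:

```python
alpha = -2 / 5                         # any exponent with -1/2 < alpha < -1/3 works

for n in (10 ** 3, 10 ** 6, 10 ** 9):
    theta0 = n ** alpha                # split point between center and tails
    print(n, n * theta0 ** 2, n * theta0 ** 3)
    # n * theta0^2 = n^(1/5)  -> infinity: the Gaussian tail can be completed
    # n * theta0^3 = n^(-1/5) -> 0:       the quadratic approximation is valid
```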

So we pick n to the minus 2/5, and that's the complete derivation that shows that it's susceptible to the saddle-point method. These are intricate calculations, but it's kind of always the same: it's a matter of where that split is, and that depends on the function. There's a place where the estimate near the saddle point is accurate and a place where the tails are negligible, and you just have to find the split between them.

And in the end we're getting Stirling's approximation back from this problem. So that's an illustration of the difficulty of proving susceptibility, and unfortunately that difficulty is still there for many problems. Again, this transfer theorem is not at all as simple as others we've looked at.

And you might well ask: we're talking about the difference between n^(-2/5) and n^(-1/2), which is about a factor of n^(1/10).

Wasn't there something in early lectures about differences like that not being relevant unless n is so huge that we wouldn't care? Is that a problem here? Actually it's not, because in these cases the estimates are in the exponent, so n doesn't need to be huge. Even if n is like a billion, these estimates are still something we're going to care about, and they still matter, because they're in the exponent.

The other thing is that I'm giving a quick illustration here; we can get a full asymptotic series to any desired precision using these methods.

Also, once we get the answers out, we can validate them numerically, which we usually do for problems of this sort. And we're still pushing toward the goal of having a general schema that will cover a whole family of classes; it's hard to claim that we're as far along for saddle points as for some other methods. Actually, when you look at the extension to limiting distributions, we are, but that's an advanced topic beyond the scope of this course.

So let's look again at the transfer for the central binomial case. Now we're estimating the coefficient of z to the n in (1+z)^(2n); that's our G(z).

16:28

Taking logs, we define f(z) = 2n log(1+z) - (n+1) log z to be our function for the saddle-point equation; the first derivative is 2n/(1+z) - (n+1)/z, and we'll also need the second derivative. The saddle-point equation is to set that first derivative to 0, so the saddle point is zeta = (n+1)/(n-1), which is what we saw before. The approximation says plug (n+1)/(n-1) into the formula, and we get a pretty ugly expression. We could go ahead and estimate it, but remember, it's okay to shift the saddle point.

If you want to simplify the calculation, notice that this saddle point is really close to 1. So rather than going through an estimate of those expressions, let's just use 1: if we say the saddle point is 1 and plug it into the equation, then we immediately get 4^n over the square root of pi n, as expected. Making that choice early on also simplifies the check of susceptibility and so forth, which is something that you want to do. You can slightly shift the saddle point as long as doing so doesn't disturb the quadratic approximation or the negligible tails; then it's not going to matter, and we can more easily complete a full derivation for a problem like this.

Again, the other approach is to just use the saddle-point bound and sacrifice the square root of pi n factor.
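Here's a quick numerical check of the 4^n over square root of pi n estimate against the exact central binomial coefficient (my own sketch, not from the lecture, using Python's math.comb):

```python
import math

for n in (10, 100, 400):
    exact = math.comb(2 * n, n)                  # central binomial coefficient
    approx = 4 ** n / math.sqrt(math.pi * n)     # saddle-point estimate
    print(n, approx / exact)                     # ratio tends to 1; error is O(1/n)
```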

But if it's an application where that factor is important, you can go in and get it using standard techniques; you need to prove susceptibility, and it takes some work. I think that's the message.

So, that's saddle-point asymptotics.