Hi, in this set of lectures we're going to talk about something called Lyapunov

functions. And what Lyapunov functions are is functions that really can

be thought of as mapping models into outcomes in the following way. So what we

can do, is we can take a model or take a system and we can ask ourselves, can I

come up with a Lyapunov function that describes that model or describes that

system. And if I can, then I know for sure that system goes to equilibrium. So what a

Lyapunov function is, is it's this tool, it's this incredibly powerful tool to help

us understand, at least for some systems, whether they go to equilibrium or not. Let

me explain what I mean a little bit more. Remember how we talked about, there's four

things a system can do: it can go to equilibrium, it can cycle, it can be

random, or it can be complex. Lyapunov functions, if we can construct them,

that's going to be one of the challenges. If we can come up with one, then we'll

know for sure that the system's going to go to equilibrium. If we can't construct

one, then maybe it goes to equilibrium, maybe it cycles, maybe it's random, maybe

it's complex, we don't know. We can't really say anything. So the challenge here,

the really hard and fun part is coming up with Lyapunov functions. If you come up

with a Lyapunov function, then you know for sure, hey, this system's going to an

equilibrium, which is a nice thing to know. Not only that, we'll see in a minute that

you can see how fast it's going to equilibrium. So, how does it work? Here's

the idea. Suppose you have a system, and I've got something I care about here, which

might be velocity on this axis. And suppose there's a minimal velocity which is zero,

which I'm representing by this big black region down here. Now, suppose that I say

the following property holds: I start with some positive velocity, and every period, if

the velocity changes, it goes down. So it's gonna go down to there, and then it goes down to

there. Now, it could be that the velocity doesn't change; if the velocity doesn't

change, then you're fixed, you're in an equilibrium. But if the velocity does

change, it has to go down. Well if that's the case, if it changes it has to go down,

at some point it's going to hit this barrier down at the bottom, this zero

velocity point. And when it hits zero, it has to stop. So that's the idea: if

the system moves, it has to fall. That's property one: if it moves, it's got

to go down. And property two: there's a minimum. Well, those two

conditions are gonna mean that the system has to stop. We've got to

pick up one little peculiar detail besides that, but that's basically the idea. If

the system is gonna move, it's got to fall and there's a min. So therefore at some

point, it's either gonna stop before the min, like it might fall, fall, fall and

then stop right here, or eventually it will hit the minimum at the bottom. That's

the idea. Now, how do economists do it? Economists do the opposite. They have

something where maybe this is happiness on this axis. And maybe people are making

trades. And you say, people trade, happiness goes up. So, I've got happiness

here. People trade, it goes up. People trade, it goes up. So any time people

trade, total happiness goes up, otherwise they wouldn't trade. So that means any

time the system moves, happiness is increasing. But, you've got this caveat

that there is a maximum happiness here, it can't go above this black bar. So what

does that mean? If anytime people trade, happiness goes up, then at some point you're

gonna hit this bar, and that means the process has to

stop, that means it's at an equilibrium, where there's no more trade.
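Just to make the trading story concrete, here's a tiny simulation. The numbers, a happiness ceiling of 100 and a minimum gain of 5 per trade, are made up purely for illustration:

```python
# A toy version of the economists' story: every trade raises total
# happiness by at least some fixed amount K, and total happiness can't
# exceed a ceiling, so the number of trades is bounded and the process
# must stop. All numbers here are made up for illustration.

MAX_HAPPINESS = 100.0  # the black bar: happiness can't go above this
K = 5.0                # any trade that happens gains at least this much

def simulate_trading(happiness):
    """Trade as long as a trade can still gain at least K; count trades."""
    trades = 0
    while happiness + K <= MAX_HAPPINESS:
        happiness += K  # each trade raises total happiness by at least K
        trades += 1
    return happiness, trades

print(simulate_trading(10.0))  # stops after at most (100 - 10) / 5 = 18 trades
```

When the loop exits, no trade can raise total happiness by the minimum amount anymore, so trading stops; that's the equilibrium.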

Everybody's happy with what they've got. So there's these two substantively identical

ideas, right? One is from physics, that if things fall every period and there's a

min, the process has to stop. And then from economics, you have where things go

up every period, and there's a max. It has to stop. That's it, that's the theorem. I

know it sounds sort of frightening, right? Lyapunov, it sounds really scary, and I'm

sure when you looked at the syllabus, you thought, oh my gosh, Lyapunov functions!

This is gonna be hard. Maybe I'll skip this lecture. I thought about calling it

Dave functions, or Maria functions, because then it wouldn't sound so

frightening if I said, we're gonna study Maria functions, you know, so, ha, that's

probably gonna be pretty easy, or Dave functions. It's just that with these

Russian surnames, you sorta think, oh my goodness, this is frightening. It's not;

it's very, very easy. Here's the formal part. What we do is we say there's a Lyapunov

function if the following holds: First, I just have some function F, and

I'm gonna call this a Lyapunov function. And there's just three conditions. The

first one is, it has a maximum value. I'm gonna do the economist's version. In the

physics version, I'd say there's a minimum value. So there's a maximum value. Second

assumption, there is a k bigger than zero. So there's some number k bigger than zero,

such that, if x_{t+1} isn't equal to x_t--so the system is

gonna basically map the state x_t into x_{t+1}; if they're not equal,

alright, if the state at time t plus one is not equal to the state at time t--

then F(x_{t+1}) is bigger than F(x_t) + k. What does that mean in words,

not in math? What it means is, if the

point is not fixed, then F increases by at least k. Just by some fixed amount. It

doesn't always have to increase by exactly k; it can increase by more. But it's got

to increase by at least k. If those things hold, so it's got a maximum, you're

always going to be increasing by at least some amount k, then at some point

the process has to stop. Because if it didn't stop, you would keep increasing by k

and would eventually go above our maximum. That's the theorem. Now what does this assumption do?
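Actually, let me sketch the whole theorem in a few lines of code first. This is a toy process I'm making up on the spot, with the Lyapunov function F(x) = x and a cap, not anything specific beyond the lecture's three conditions:

```python
# A sketch of the theorem with a made-up process and the Lyapunov
# function F(x) = x: F has a maximum M, and whenever the state changes,
# F rises by at least K, so the process reaches a fixed point in at
# most (M - F(x0)) / K steps.

M = 20  # maximum value of F
K = 2   # minimum increase whenever the state actually changes

def step(x):
    """One period of the (hypothetical) system; x near M is a fixed point."""
    return x + K if x + K <= M else x

def run_to_equilibrium(x):
    steps = 0
    while step(x) != x:  # keep going until the state stops changing
        x = step(x)
        steps += 1
    return x, steps

print(run_to_equilibrium(0))  # hits the fixed point in (20 - 0) / 2 = 10 steps
```

Since F goes up by at least K whenever the state changes, and F can never pass M, the loop can run at most (M - F(x0)) / K times. Okay, back to that assumption about k.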

What is this thing about? Before, I just said it has to be bigger; now I've got

that it's gotta be bigger by at least k. What's going on? Well, this goes back to something

way back in philosophy called Zeno's paradox and Aristotle's treatment of this

is probably the one most of you learned in college, and that is: suppose I want to

leave this room right here, and the first day I'm

standing right here. Here I am, da-ta-da, and the first day I go half way to the

door. Then the next day I go another half way. And then the

next day I go another half way, the next day another half way, the next day another

half way. I'd never actually leave the room. Because

what's happening here is I'm going up a half and then a quarter, and then an

eighth, and then a sixteenth. So if I made my steps smaller and smaller and smaller

and smaller and smaller and smaller and smaller, it could be that I continue to

increase, but I never actually get to the maximum. But if instead, I assume that

each step has to be at least 1/16. Well then after sixteen steps, I'm going to be

out of the room. So what Zeno's paradox is that you can basically keep making steps halfway, and

you'll never actually exit. And the paradox was that you could keep moving

towards the door but never actually get to the door. The way we get around that is, we

make this formal assumption that says there's some k such that, if you move, you

go up by at least k. So in this case I talked about it being one sixteenth: if

you go up by at least one sixteenth, then in sixteen steps you're out of the

room. And since you can't go past the door, that's a max, so what's gonna happen is, the process has to

stop. So that's all there is to it. We have a function, F; it's

got a maximum value. And then there's some k such that, if it's the case that the process

moves over time, then in the next period, you've gone up by at least that amount k.
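You can even check the Zeno picture directly in code. This is my own toy version, using exact fractions so rounding doesn't fudge the answer:

```python
from fractions import Fraction

# Zeno's paradox, checked with exact arithmetic (a made-up toy version):
# halving the remaining distance never reaches the door, while steps of
# at least 1/16 get there in at most sixteen moves.

DOOR = Fraction(1)  # the door is at distance 1

def zeno_position(n):
    """Exact position after going halfway to the door n times."""
    pos = Fraction(0)
    for _ in range(n):
        pos += (DOOR - pos) / 2  # cover half the remaining distance
    return pos

def fixed_step_count():
    """Move at least 1/16 each period (capped at the door); count steps."""
    pos, steps = Fraction(0), 0
    while pos < DOOR:
        pos = min(pos + Fraction(1, 16), DOOR)
        steps += 1
    return steps

print(zeno_position(100) < DOOR)  # True: still short after 100 halvings
print(fixed_step_count())         # 16
```

So halving forever never gets you out, but a minimum step of 1/16 turns "never" into at most sixteen moves.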

And since there's this max, you're going up by at least k each time. Eventually

you're gonna hit that max and the process has to stop. And there's a bonus we just

got as well, right? If each time I go up by 1/16th, then in sixteen steps, I'm

gonna have to stop. So you can also say how fast the process is going to stop, and

that's obviously not a very complicated calculation at all. Here's the tricky part

[laugh] about this, the hard part about this is constructing the function. So the

theory, the idea that there's a function, there's a max, we go up by k each time,

that's really straightforward. The really tricky part is going to be coming up with

a Lyapunov function, coming up with that function F. So what we're going to do in

this set of lectures is, we're gonna take some processes, things like arms

trading, trading within markets, people deciding where to shop, and we're gonna

show how, in some of these cases, it's really easy to construct Lyapunov

functions. In other cases, it's really hard to construct Lyapunov functions

and in some cases we can't construct Lyapunov functions at all. So we're just going

to explore how this framework, this Lyapunov function framework, can help us

make sense of some systems. Help us understand why some things become so

structured and so ordered so fast, and why other things still seem to be churning

around a little bit. So the outline of what we're going to do is, we're just going

to start out by first doing some simple examples, see how Lyapunov functions

work. Then we're going to move on and see some sort of interesting applications of

Lyapunov functions, maybe when they don't work. And then from there, we'll go on and

talk about processes that maybe we can't even decide whether Lyapunov

functions exist or not, some open problems in mathematics that involve trying to

figure out: does this thing go to an equilibrium, or does the thing continually

churn? And then we'll close it up by talking about how Lyapunov functions

differ from Markov processes. Remember, Markov processes also went to equilibrium.

We'll talk about how those equilibria are different from the equilibria we're talking about

in these Lyapunov functions, and also how just the entire logic about how the system

goes to equilibrium is different in the two cases. Okay, so let's get started.