Hi. We've been studying Markov models. And we looked at that first model, where

there were students who were alert or bored. And then we looked at the more realistic, more interesting model, where countries can be free, partly free, or not free. And in each of those cases, what we saw is that the

process converged. Right? That it went to a nice equilibrium. What we want to do in

these lectures is study something called the Markov convergence theorem. Sounds

scary, it's a little bit scary. What it's going to tell us is that, provided a

few assumptions are met, and they're fairly mild assumptions, Markov

processes converge to an equilibrium. So this is a powerful result, because it tells

us what's going to happen to a Markov process. Now remember, this is a statistical equilibrium, right? It's going to keep churning, but the probability that

you're in each state will stay fixed. What we want to do is understand, what are the

conditions that must hold for that to be true? So let's go back and think

of our first example, where we had alert students and bored students and we had

some p that was the probability that you were alert, and what we could do is we could say, well, 0.8p + 0.25(1 − p) has to equal p, that's an equilibrium, and we

found that if p was equal to five ninths, then that probability stayed the same.
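That arithmetic can be checked with a few lines of Python; 0.8 and 0.25 are the stay-alert and bored-to-alert probabilities from the model:

```python
# Next period's alert fraction: alert students stay alert with
# probability 0.8, and bored students become alert with probability 0.25.
def step(p):
    return 0.8 * p + 0.25 * (1 - p)

# Solving 0.8p + 0.25(1 - p) = p gives 0.25 = 0.45p, so p = 5/9.
p_star = 0.25 / 0.45
print(p_star)           # 0.5555... (five-ninths)
print(step(p_star))     # same value: the distribution doesn't move
```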

If five-ninths of students were alert, four-ninths were bored, then we stayed in

those proportions. That's what we mean by an equilibrium. So what we wanna do is ask: what has to be true of our Markov process for an equilibrium to

exist? You know, there's just four assumptions. The first one is, you've

gotta have a finite number of states. Well, that's the definition of a Markov

process, at least the ones we're considering. So that's always gonna be

satisfied. Second is, the transition probabilities have to be fixed. So by

that I mean that, from period to period, the probability of moving from one state to

another doesn't change. Now, we'll talk in a minute about why that might not always be

true. But for the moment let's just assume that's the case. Third, and this is sort

of a big one: you can eventually get from any state to any other state. So, it may

not be that you can get from state A to state C right away, and maybe you have to

go through B. But as long as there's some way to get from state A to state C, that's

fine, that'll satisfy assumption three. And the last assumption, the fourth one, is sort of a technicality: the process is not a simple cycle. So if I wrote down a

process where you automatically go from A to B, and automatically go from B to A,

then the thing would, you know, churn. The thing is, it wouldn't really go to this

nice stochastic equilibrium necessarily. It could be all A's, then all B's,

then all A's, then all B's, then all A's, then all B's. So if you rule out simple

cycles, and just assume finite states, fixed probabilities, can get from any

state to any other, then you get an equilibrium. So this is the Markov

convergence theorem. Given A1 through A4, the Markov process converges to an

equilibrium. And it's unique, so you're gonna go where you're gonna go. No matter

whether you start with all bored or all alert, all free, all not free -- you're

gonna end up at an equilibrium, and it's gonna be the same one, determined entirely by those transition probabilities. So if I write down some Markov

process like this, and I go ahead and solve for it, there's only one answer.

There's gonna be a unique answer for what that equilibrium is gonna be. So let's

think about what this means, because this is incredibly powerful. The first thing is the initial state doesn't matter. Doesn't matter where I start. If I start with all

free, all not free, all alert, all bored, right? Anything that's a Markov process,

any Markov process, the initial state will not matter. History doesn't matter either.

It doesn't matter what happens along the way. If it's a Markov process, history doesn't

matter. What's gonna happen is gonna happen, we're gonna go to that

equilibrium. Now, the particular history could depend on, you know, which students move from alert to bored. But the long-run percentages of alert and bored, the long-run percentages of free and not-free states, are gonna be the same regardless of how history plays out.
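A small sketch, using the alert/bored numbers from before, of why the starting point washes out; both runs land on the same five-ninths no matter where they begin:

```python
def step(p):
    # one period of the alert/bored chain from earlier in the lecture
    return 0.8 * p + 0.25 * (1 - p)

def long_run(p, periods=100):
    for _ in range(periods):
        p = step(p)
    return p

print(long_run(0.0))   # start with everyone bored
print(long_run(1.0))   # start with everyone alert
# both converge to 5/9, roughly 0.5556
```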

Intervening to change the state doesn't matter either. So I go in and change a state. Like, if I go in and say, well, let's just take a country and move it from free to not free. Well, guess what? In the long run, that's gonna have no effect. Now

we've posed all of these as puzzles: The initial state doesn't matter? History

doesn't matter? Intervening to change the state doesn't matter? And they're puzzles because that doesn't seem to make any sense, because if you think about it, history matters a lot. Initial conditions matter a lot. Interventions can

matter a lot. When you think about, you know, whether you're running a small

organization, a big business or a government, you think about, let's come in

and intervene here, so we're gonna make the world better. This Markov process

seems very deterministic. It's sorta saying: none of these make a

difference. It doesn't matter where you start out. What happens along the way

doesn't matter. And if you intervene it's not gonna have any effect. Let's see what

we mean by that. Let's just think of the mechanics at work in a relationship. A relationship could be happy. Another relationship could be tense. And

suppose we're modeling hundreds of relationships, a whole community of

people. And we're just keeping track of how many relationships are happy and how

many are tense. Let's suppose that these relationships follow a Markov process. So

there's fixed probabilities of moving between happy and tense. We might say,

"Well, you know, there's a lot of tension in the community right now. So let's just,

you know, buy a whole bunch of people dinners." So let's just move 50 couples

from the tense state to the happy state by, like, giving them free dinners on the

town. Well if you do that, what's gonna happen? Well, for a very short period of

time, you'll make more people happy. But there's gonna be that movement back

towards tense. And the transition probabilities, if they stay fixed, are

gonna take you right back to the same equilibrium as before. So there's gonna be

no effect in the long run on the system. So, what does this mean in general? I mean, we wanna take these models seriously, but not too seriously. Does this mean that

interventions have no effect? That interventions are meaningless? And does it

mean that we shouldn't even redistribute stuff? That if we redistribute happiness, or we give these people meals, you know, to make them happy, that has absolutely no

effect either? Does this mean we shouldn't do these things? Well, let's, let's be

careful. There's a number of reasons why, even though the Markov model tells us that history doesn't matter, interventions don't matter, and initial conditions don't matter, those things really could matter. And the first one is this: it could take a

long time to get to the equilibrium. So let's go back to those happy and tense

couples. It could be that if you make some of those tense couples happy, then yeah, eventually you're going to go back to the old equilibrium.
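With made-up transition probabilities for the happy/tense couples, a minimal sketch of that wash-out:

```python
# Hypothetical numbers: a happy couple stays happy with probability 0.9,
# a tense couple becomes happy with probability 0.1. The equilibrium
# happy fraction solves 0.9h + 0.1(1 - h) = h, which gives h = 0.5.
def step(h):
    return 0.9 * h + 0.1 * (1 - h)

h = 0.5               # community sitting at its equilibrium
h = h + 0.25          # intervention: free dinners bump happiness to 0.75
for period in range(40):
    h = step(h)
print(round(h, 3))    # back to 0.5: the bump has washed out
```

The boost is real for a while, but with the transition probabilities unchanged it decays back toward the old equilibrium.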

But it could take twenty years. And if it takes twenty years, well, in those intervening twenty years there's a lot more happy couples. Or think about countries: maybe in the long run only 60 percent of countries will be free. But if we artificially make

too many free, we could have 30, 40, 50 years of a whole bunch of countries

remaining free that wouldn't have been free otherwise. So even if in the long run

we end up at the same place, it could be that in the intervening years, we still

get some sort of benefit. But that idea, that it just takes a long time to get there and maybe we can get a little boost in between, is still sort of, you

know, taking this somewhat negative view that any intervention we do can't matter.

But yet we've got this darned theorem, right? We've got this theorem that says,

finite number of states, fixed transition probabilities, can get from any state to

any other, then none of these things do matter. Well, let's look at these. Let's

look at them seriously and ask, which of these things maybe doesn't hold. Well,

the finite state thing, that's kinda hard to argue with. Because we can just, sort of, bin reality into different states. Remember, earlier we talked about

categories? Well, these states are categories, so we can think about which,

you know, categories we create to make sense of the world. And having a finite number of them doesn't seem like the big problem. This "can eventually get from any one state

to any other", well that one, you know, maybe there's cases where that's not true.
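For instance, here's a sketch, with made-up numbers, of a three-state chain where assumption three fails: states A and C are absorbing, so where you end up depends entirely on where you start:

```python
# Hypothetical chain where A and C trap you: you can never get from A
# to C or back, so "any state to any other" fails.
T = {"A": {"A": 1.0},
     "B": {"A": 0.5, "B": 0.2, "C": 0.3},
     "C": {"C": 1.0}}

def step(dist):
    nxt = {s: 0.0 for s in T}
    for s, p in dist.items():
        for t, q in T[s].items():
            nxt[t] += p * q
    return nxt

def long_run(dist, periods=200):
    for _ in range(periods):
        dist = step(dist)
    return dist

print(long_run({"A": 1.0, "B": 0.0, "C": 0.0}))  # stays all A
print(long_run({"A": 0.0, "B": 0.0, "C": 1.0}))  # stays all C
```

Two different starting points, two different long-run outcomes: when this assumption is violated, there's no longer a unique equilibrium.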

Maybe there are cases where you can't get from one state to another. So that's what we

want to look at. But the one we really wanna focus on is this "fixed transition

probabilities", because it could be that when we move from one state to another,

when we move from tense to happy, when we move from not free to free, or as more

countries move from not free to free, suddenly the transition probabilities in

the system change. There are some larger forces that make these transition probabilities change. So the thing we wanna focus on when we think about why history may

matter, why interventions may matter, is because those transition probabilities

may change over time, as a function of the state we're in. Now this doesn't mean

the Markov model's wrong. The Markov model's right. It's a theorem, it's always

true. But if we want history to matter, if we want interventions to matter, then

we've gotta focus on this. We've gotta focus on interventions or policies or

histories that can change those transition probabilities. Let me phrase this in a

slightly different way. If we think about changing the state of the process, moving from tense to happy, that's just gonna be a temporary effect. But if you think about changing the transition probabilities, then we can have a permanent effect. So to

think about what useful interventions are, they're gonna be interventions that change the transition probabilities. If you think about moments in history, it could be

things like tipping points, that we've actually talked about before. Those are gonna be

moments in history that change the transition probabilities. So if we have a

tipping point, if we move from one likely history to another, what

must be going on is, those transition probabilities have to be changing. So what have we learned?

[laugh] We learned something very powerful. That if we have a finite set of states,

fixed transition probabilities, and you can get from any state to any other, then

history doesn't matter, interventions don't matter, initial conditions don't matter.

Now that's not to say that those things don't matter in the real world, they

probably do. But if they do, then one of those assumptions has to be violated. States aren't finite? That's a hard one to disagree with. So it must be that either we can't get from someplace to every place else, or that

those transition probabilities can change. Now the most likely one is that transition

probabilities can change, and interventions that really matter, interventions that tip,

histories that matter, are events that change those transition probabilities. So

what we see is not that everything's a Markov process. What we see is that this

Markov model helps us understand why some results are inevitable, because they satisfy those assumptions, and why some results are not. Okay. Thank you.