Hi. I wanna conclude our lectures on randomness

and random walks with a very simple applied model, something I'm gonna call

the finite memory random walk model. And then I wanna give some summaries of some

of the things we've learned about thinking about randomness and random walks, and

applying those ideas to the real world. So let's talk about this finite memory random

walk. Here's the idea. In our random walk model, your value depended on every
single shock, all the way through. In a finite memory random walk, your value
only depends on the previous five shocks, or the previous seven shocks.

Let me show you. So, the value of something at time T, instead of being the sum
of all the shocks from time zero up to T, just includes the previous five
periods: V(T) = X(T) + X(T-1) + X(T-2) + X(T-3) + X(T-4). So you think of there
being a window like this

and that window slides along over time. As time passes, you sort of just take the

last five things that have occurred. So, for example, if I want to look at the

value of something at time ten, it's going to depend on X10 plus X9 plus X8 plus X7

plus X6. Then at time eleven, we're going to add in the shock that we get at
eleven, but then we're gonna take the shock that we had at six and chop that
off. Now, what are these X6, X7, X8, X9's? They could be two things in
particular. One is, they could be employees: imagine you bring in one new
employee each period and one employee retires. Then this would be how good your
new employee is and how good the employee you just let go was.
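To make the window mechanics concrete, here's a minimal sketch of the sliding-window value (my own illustration, not from the lecture), using deterministic shocks so the arithmetic is easy to check:

```python
def finite_memory_walk(shocks, window=5):
    """Value at each time t is the sum of the last `window` shocks."""
    values = []
    for t in range(len(shocks)):
        start = max(0, t - window + 1)      # left edge of the window
        values.append(sum(shocks[start:t + 1]))
    return values

# At t = 10 the value is X10 + X9 + X8 + X7 + X6;
# at t = 11 we add X11 and drop X6.
shocks = list(range(12))  # X0..X11 as simple deterministic shocks
vals = finite_memory_walk(shocks)
print(vals[10])  # 10 + 9 + 8 + 7 + 6 = 40
print(vals[11])  # 11 + 10 + 9 + 8 + 7 = 45
```

In a real application the shocks would be random draws (the quality of each new employee or product), but the window logic is the same.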

Alternatively, you can think of these as products, the features of a new product
line. The sales on each of those product lines could be a random value, so if
you wanna get the value of the firm, it depends on the current constellation of
products. Each year you bring in a new product and you wipe out an old product.

So you could have a car company that has five models of cars; maybe every five
years they bring out a new model of one of those cars and dump the old model.
And this tells you, in some sense, the value of their portfolio of cars. That's

the idea. What I want to do is to use this to show you how we can take our
simple random walk model and make it slightly more realistic to capture

something in the real world. Well, let's use this to look at sports. Sports is a

good place for this model, because you think of a team as consisting of a set of

players. And every year the teams draft new players. So these are the new people

they draft, and these are the people that they retire. And we can ask: does

this capture what competition looks like among teams? So here's my model, really

simple: 28 teams, and each team's value depends on five players, like basketball
teams. And the champion is whichever team has the highest value at

time T. And I'll run this thing for 28 years. You can do this on a spreadsheet:
just run it for 28 years, and we'll compute the value of each team and crown one
of them champion. Let's see how it works. Now, you might go, okay, that's a
model, and it's gonna predict things that are going on. But we can also use it
to predict other stuff that maybe we wouldn't have expected. Once we have this
model, we can predict things along a lot of domains. So one of the things we can
predict is the number of champions. So,

how many champions would you expect the model to produce? What we see is that in

the 28 years the model produces sixteen, thirteen and sixteen different champions.
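The spreadsheet exercise described above is easy to sketch in code. The 28 teams, five-player rosters, and 28 seasons are from the lecture; the uniform distribution of player quality and the seed are my own assumptions:

```python
import random

def simulate_league(n_teams=28, window=5, n_years=28, seed=0):
    """Each team's value is the sum of its last `window` player draws;
    each year every team drafts one new player and retires its oldest.
    Returns the list of champions (team indices), one per year."""
    rng = random.Random(seed)
    # Start each team with a full roster of random player qualities.
    rosters = [[rng.random() for _ in range(window)] for _ in range(n_teams)]
    champions = []
    for _ in range(n_years):
        values = [sum(r) for r in rosters]
        champions.append(values.index(max(values)))
        for r in rosters:          # draft one player, retire the oldest
            r.pop(0)
            r.append(rng.random())
    return champions

def longest_streak(seq):
    """Length of the longest run of consecutive repeats."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

champs = simulate_league()
print("distinct champions:", len(set(champs)))
print("most championships:", max(champs.count(c) for c in set(champs)))
print("longest streak:", longest_streak(champs))
```

Running this with different seeds gives the three statistics, distinct champions, most titles by one team, and longest streak of repeat winners, that the lecture compares against the NBA, NFL, NHL, and MLB data.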

Now, I've only circled these three, and the reason why is this: graph number
four is the NBA, graph number five is the NFL, graph number six is the NHL, and
graph number seven is Major League Baseball. So, it's interesting, if you

look at the model and then look at the real data over the last 28 years, you see

that they're remarkably close. Well, let's look at something else. The most

championships won. Again, here's our model, and here's the NBA, here's the NFL,
here's the NHL, and here's Major League Baseball. It's fairly hard to
distinguish which is the model and which is the real world. What if we look at
the longest streaks? Well, for longest streaks, the model produced streaks of
five, but in the actual data the most we got was three. Now, that's not
surprising, really, because in sports it's probably really hard to win four in a
row: other teams really start gunning for you, and they invent their lineups in
such a way that they're all trying to beat you. So it's very hard for a sports
team to win five in a row, whereas in my model that was more likely to happen. But

overall, if you look at the three things, distinct champions, most
championships, and longest streaks, we saw that all three of those things sort
of lined up. So we've

talked about why we model. One of the reasons we model, especially with these
very simple models, is this: if we think of team performance as being the sum of
random values, what does that tell us? It tells us there should be a whole bunch
of different winners, lots of distinct champions, and that the most
championships we'd expect somebody to win in a twenty-year period is somewhere
between five and eight, and that's also true. And we shouldn't expect any
incredibly long streaks, because of the regression to the mean phenomenon. And all

three of these things hold true in the data. So by constructing that simple

model, we're able to predict a whole bunch of stuff. So, that's again, one of the

real values of models. We construct them for one reason and then we can predict all

sorts of other stuff. Let me just give a quick summary of some of the stuff we've

learned by thinking about randomness. So, one is, we shouldn't confuse luck and
skill. There are some domains that are mostly luck, other domains that are
mostly skill. And if we get a lot of data, we can figure out how much is skill
and how much is luck. Second, there is this paradox of skill: once we get the most

successful people together, then whoever wins may come down to luck, because the
differences in their skill are so slight. Third, it always makes sense to

do the math on streaks and clusters. If you see somebody win fifteen times in a
row, or if you think, boy, this person's got a hot hand, or if you think, boy,
this looks like a cluster of, you know, cancer cases or crime, you want to do
the math and find out: is that something we'd expect to happen anyway, or is it
really a streak or a cluster? Fourth, there's regression to the mean: if things
are a random walk, then we'd

expect to see all sorts of regressions to the mean. And so we shouldn't necessarily

bet that just because someone or some firm's been successful in the past, it'll
be successful in the future. We also know, because of the no free lunch theorem,
that just because a heuristic worked in the past, like rinsing our cottage
cheese, it may not work in the future.

Finally, Wall Street kind of follows a random walk. That's not exactly right,
but it's not a bad model to think of Wall Street as a random walk. And that's a
good thing to keep in mind, because you might have a friend say to you, oh, hey,
buy this stock, it's going up, it makes a ton of sense. Well, you've got to think about

the fact that, if your friend knows that the value's gonna go up, other people

probably know the value's gonna go up, and the value may have already gone up. And so

therefore, you may be investing in something that's just a random walk. That's
why, at least personally, I'm always very cautious about investing in individual

stocks. Because I estimate the market has probably captured all of the relevant

information. And, I guess, one more thing. We then took that random walk model
and we constructed a finite memory random walk model. And with that, we were
able to see we could organize a bunch of data,

number of championships, number of winning streaks, distinct champions, those sorts

of things. And our model produced stuff that looked pretty close to the real data

with very little effort. And it actually makes a lot of sense to think about the
value of a team, or the value of a firm, as being something like a finite memory
random walk. Again, the model helps us make sense of the world in an interesting
way. Okay, so that's a lot on randomness and random walks. It's also interesting
to tie this back to some of the stuff we've already learned.

So let's think about it in the context of path dependence. If we have a random
walk, it's not path dependent, because what happens at this step doesn't depend
in any way on what happened in the past. So when we think about some dynamic
sequence of events, we can ask ourselves: is this a random walk? Is it path
dependent? Is it a Markov process? Is a Markov process a random walk? No, it
isn't, because in a Markov process, where you go next depends on the current
state you're in. So Markov processes aren't random walks. So now we've got these
three things: we've got path

dependence, we've got Markov processes, we've got random walks: lots of models
in our heads for thinking about how events unfold. And by

having lots of models in our heads, we're better able to make sense of the world.

All right. Thanks.