2:07

Clearly, this one is equal allocation, and this one is actually a slight deviation from equal allocation. If I plot them on a chart of the two users' rates, one point would lie on the 45-degree line, and the other just a little above it. Even though this one does not lie on the 45-degree line, almost all people will still prefer it, and think that it is a better point, okay?

So clearly, when we say that a more fair point is actually a worse point, that doesn't give the word "fair" a very useful interpretation. So if we say this one is more fair than that, it means we have somehow taken into account some notion of efficiency, okay? We'll come back to this later in today's lecture. This also says that magnitude matters: it is the absolute allocation, not just the relative proportion, that makes the notion of fairness useful. And this underlies something called the difference principle in Rawls's theory of justice.

In the advanced material part of the lecture, we'll briefly go through this: one of the most influential works in political philosophy of the twentieth century on the subject of distributive justice, the fairness of liberty, and resource distribution. Now, of course, a tougher choice would have been choosing between, say, (1, 1) and (1, 2), okay? But then, maybe magnitude matters,

okay? In fact, there's an interesting experiment that highlights why efficiency matters, and why this would have been a tougher choice between these two. It's called the ultimatum game. Basically, there are two players in this game, okay? And one player is able to issue the following offer. Say here's a dollar, okay?

It just drops from the sky on our laps. And I can decide how to allocate this

dollar between the two of us: I'll keep x, and then you will have 1 - x. Now, this may remind you of the framework of negotiation based on Rubinstein's bargaining theory back in Lecture 5,

okay? But in this case the choice is actually much sharper.

If I propose x and you get 1 - x, then you can only have two choices.

A, you accept it as is; or B, you reject it, in which case the game is over, okay? There are no further iterative rounds: you will get nothing and I will get nothing. So this was an experiment carried out across many different cultures and demographics, to see what x people would pick, okay? And what x the other party would accept.

And it turns out that, among the many interesting implications, one is that magnitude matters: whether this is dividing $1 or $1 billion makes a significant difference, okay? Alright. So, this is the first thinking aloud about this fairness function, okay? Is equality fairness? Probably not. And if not, how do we understand the impact of magnitude? How do we understand the tradeoff between everybody being much wealthier, except that some are wealthier than others, versus everybody starving to death, which would be perfectly equal since all starve alike? Now, another way to think about this is

to think about the different contributions, as well as the different needs, of different people in this network, okay? Whether it's a socioeconomic network or a technological network. For example, if this ISP glues together the rest of the internet, okay? Maybe this is a very important node, a very important link. Its contribution is significant, okay? We may or may not appreciate how much greater its contribution is, but by some centrality measure, for example, it might be ten times more important than the others. So, maybe this ISP deserves a higher allocation of resources. And some people have more needs than others too, okay? So, how do we think about that issue?
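To make the centrality idea concrete, here is a minimal sketch; the toy topology and the rule of allocating capacity in proportion to degree centrality are my own illustrative assumptions, not something prescribed in the lecture.

```python
# Hypothetical toy topology; each pair is a link between two nodes (e.g., ISPs).
links = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]

# Degree centrality: count how many links touch each node.
degree = {}
for u, v in links:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Split a shared capacity in proportion to each node's centrality.
capacity = 100.0
total = sum(degree.values())
allocation = {node: capacity * d / total for node, d in degree.items()}

print(allocation)  # node A, which glues the graph together, gets the largest share
```

Any other centrality measure (closeness, betweenness, eigenvector) could be substituted for the degree count; the allocation rule stays the same.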

7:34

okay? So, rather than differentiating, say, so that two students get an A+ and the other 98 get somewhere between A and D, okay? Including quite a bunch with a D, and maybe even a few getting an F grade. I'll say, well, let's just be kind and generous.

I will just give a B grade to everyone. First of all, this is problematic because it may not provide the right incentives for the students to work hard. Second, this is hardly a fair allocation of grades, as most people would agree; even those getting a bump from D to B will think, hey, I just got an unfair B grade. It somehow does not reflect that grades should differ depending on performance; I shouldn't have asked the A+ students to subsidize my grade.

And third, this assumes that I can assign as many B grades as I want. But often in a system, in a network, you have a total resource constraint: there are only so many B grades you can assign without violating that constraint.

8:49

Alright. So, while we appreciate that different people should receive different amounts of pay and resources depending on performance, we also think the following. Suppose there are some lazier, less capable, underperforming workers, okay? The fact that they are lazier and underperforming doesn't mean that they, and their families, should just be left to starve to death. At the same time, few people will think that these workers deserve the exact same pay, or even anything remotely similar to the pay of the others,

okay? So, if I play basketball, should I be getting the same pay as Michael Jordan? Some people may view Michael Jordan as getting a ridiculous, unreasonable, unjustifiably high pay. But who's to say? He's that much better than most of us.

So now, most people would agree that, just because some don't perform as well as the others, they shouldn't be left to starve. At the same time, they shouldn't be getting the same as everyone else. The question then is, how much should they get? Most people say, well, they should get at least the basics, for example, a very basic amount of food and shelter. Now, how much is a basic amount of food and shelter? Is an annual vacation to the Bahamas part of a basic package? That means we have to somehow quantify, because arguing in English usually leads to nothing, as we do not even agree on the scale of measurement.

So, how can we quantify the notion of fairness?

Now, there are quite a few different dimensions.

For example, (a), procedural fairness. Too often the word fairness has been used as a disguise to take away rights and liberty. And too often an outcome that somebody views as fair enough comes from a procedure that is not, okay? So, in legal studies and in political science, people study procedural fairness a lot. Unfortunately, we will not have time to talk about that, even though it is, I guess, way more important than whatever we will be talking about,

okay? Then, in (b), there are studies in a lot of fields looking at different metrics, okay? To measure a given vector. I'll give you a vector of allocation, okay? For example, (1, 1), okay? Or (1, 2, 3) among three people, or (1, 10, 100) among three people, okay? And I ask you to give me some metric to measure it. This has been done in quite a few different ways, and we will be taking this viewpoint,

okay? And (c) is, as we mentioned, the notion of bargaining, alright? And fairness in bargaining, okay? In (d), there are viewpoints based on certain kinds of games, like the ultimatum game I mentioned, the divide-the-dollar game, okay? Or extensions of fair cake-cutting.

12:35

We'll mention some of these in a homework problem, but we won't be going through them in detail during the lecture, okay? So today, we're just going to focus on one particular angle. And that is: if you give me a vector, I would like to turn it into a scalar. Now, there's no shortage of metrics that do exactly that. In fact, if you look at different existing notions, some will say this vector gives a scalar of 0.33, some say it gives 0.86, and this one turns into 0.01 or 0.41. These are not made-up numbers; they are what different metrics actually map these two vectors to,

okay? So, efficiency aside, this one clearly gives more absolute value to everyone. Even ignoring that fact, just look at the relative shares of the three users here. People would probably agree that this is more fair than this, if you ignore efficiency. But how much more fair? And therefore, is it worth the sacrifice in efficiency? That is clearly not understood. If I just compare this versus this, it's 33 times more fair. But if I compare this with this, then

it's only twice as fair. And indeed, there are many such so-called diversity indices in statistics, in sociology, in computer science. For example, I can look at the ratio between the smallest and the largest entries, that is one case, okay? The spread between the smallest and the largest. Or I can look at something a little more sophisticated, like the entropy function.
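As a quick sketch, a few such metrics can be computed directly. The helper names here are my own; the example vectors are the allocations mentioned above, and the third function is the normalized second-moment index (Jain's index) that comes up in a moment.

```python
import math

def min_max_ratio(x):
    # Ratio of the smallest entry to the largest: 1 means perfectly equal.
    return min(x) / max(x)

def entropy(x):
    # Shannon entropy of the normalized shares x_i / sum(x).
    total = sum(x)
    return -sum(v / total * math.log(v / total) for v in x)

def jain_index(x):
    # Normalized second-order moment: (sum x)**2 / (n * sum x**2), in (0, 1].
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

for vec in [(1, 1, 1), (1, 2, 3), (1, 10, 100)]:
    print(vec, min_max_ratio(vec), entropy(vec), jain_index(vec))
```

All three are scale-invariant, so (1, 1) scores the same as (100, 100); for instance, Jain's index maps (1, 2, 3) to about 0.86 and (1, 10, 100) to about 0.41, while the min-to-max ratio of (1, 10, 100) is 0.01.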

14:39

This is usually used to measure the amount of uncertainty, but it can also be used to measure the amount of variation. Well, speaking of variation, how about the standard deviation, or maybe the second-order moment of this vector, okay? Normalized. That's called Jain's index in certain networking literature, okay? A lot of these try to normalize, so that they map whatever vector into a number in the range between zero and one, where one is most fair,

Now, we have seen another related but subtly different approach where we

provide an objective functions and optimization problem, and then try to

understand what is the resulting optimizer's property.

For example, alpha fair utility function, okay? Utility parameterized by the

parameter alpha between zero and infinity says that I'm going to look at [SOUND]

one minus alpha. This function, if alpha is not one and by

L'Hopital's Rule this function if alpha is one,

okay? If I maximize a summation of these utility functions, I got a utility

function for the entire vector. And the resulting maximizer, x star, satisfies the so-called alpha-fair notion. This includes the proportional fairness that we talked about, when alpha is one. It also includes max-min fairness, which says you cannot make one user's rate or resource higher without making someone who is already getting a smaller amount of resource get even less, okay? That's called max-min fairness, which happens as alpha becomes very large, close to infinity, okay? These two are special cases. As you can see already, these two

different approaches, a normalized diversity index and an alpha-fair objective function, are actually different, right? For example, the treatment of efficiency is different. For alpha-fair utility maximization, efficiency is implicit, baked into it, but still there, whereas a normalized index is often not affected by the magnitude, okay? So the vector (1, 1) is the same as the vector (100, 100). Can these two approaches be unified?
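Before turning to that question, here is a small numerical sketch of how the alpha parameter interpolates between objectives. The two-user capacity constraint x1 + 2*x2 = 1 is a made-up example of mine, and a brute-force grid search stands in for actually solving the optimization.

```python
import math

def alpha_utility(x, alpha):
    # Alpha-fair utility: x**(1 - alpha) / (1 - alpha), or log(x) when alpha = 1.
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

def best_x1(alpha, steps=10000):
    # Maximize U(x1) + U(x2) subject to x1 + 2*x2 = 1, by brute-force grid search.
    best_u, best = -math.inf, None
    for i in range(1, steps):
        x1 = i / steps
        x2 = (1 - x1) / 2  # user 2 consumes twice the capacity per unit rate
        u = alpha_utility(x1, alpha) + alpha_utility(x2, alpha)
        if u > best_u:
            best_u, best = u, x1
    return best

print(best_x1(0))   # alpha = 0: maximize total throughput; user 1 gets nearly all
print(best_x1(1))   # alpha = 1: proportional fairness; x1 = 0.5, so x2 = 0.25
print(best_x1(50))  # large alpha: approaches max-min; x1 and x2 both near 1/3
```

Sweeping alpha from 0 toward infinity traces the whole family: pure efficiency at one end, max-min fairness at the other, with proportional fairness in between.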

And can more approaches be discovered? In fact, how many more approaches are there? Well, let's try something that we saw back in Lecture 6, when we talked about the axiomatic construction of voting theory and of bargaining theory. Back then, we looked at the axiomatic treatment of Arrow's impossibility theorem, as well as the axiomatic treatment of bargaining by Nash, in the advanced material part of the lecture, okay? So, in one case a set of axioms led to an impossibility result, and in the other case to a unique possibility result. So, what kind of axioms are we talking

about here? We will see them in the advanced material of this lecture. First, the axiom of continuity. Very briefly, it just says that the function that maps a given vector of allocation to some scalar, the fairness value, okay? Should be a continuous function, okay?

Then, the axiom of homogeneity, which says that if I scale this x by, say, a factor of five, looking at five times the vector x, it should give the same value as if I were looking at x. In fact, it doesn't matter whether it is five or any positive constant; scale does not matter. Such a function is called a homogeneous function,

okay? So, efficiency is automatically taken out of consideration for the moment. Then there are the axioms of saturation and of partition. These are technical axioms that we will skip here and leave to the advanced material; roughly, they say that you can grow the population size and the notion of fairness still remains well defined. And finally, the axiom of starvation, which says that

the allocation (1/2, 1/2) between two users, the equal allocation, should be no less fair than the allocation (0, 1), which starves one of the two users, okay? Notice this is not saying that equal allocation should, by axiom, be viewed as most fair. It simply says it is no less fair than starvation, okay? So, it's a very weak statement, and therefore a very strong axiom. It turns out that, if you believe these

five axioms, then, skipping the derivation, we will be able to derive a unique family of functions F, mapping a vector of allocation to a scalar representing the fairness value. And this scalar will allow us to do two things. A, it allows quantitative comparison, okay? Between two vectors: which is more fair. And B, scale: it not only gives you an order comparison, but also provides a numerical scale to talk about how much more fair one allocation is with respect to the other. So, it is our job now to go through that unique family of fairness functions, constructed based just on these axioms.
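As a closing numerical sketch, one can at least check a couple of the axioms against a concrete scale-invariant metric. Here I use Jain's index from earlier purely as my own illustration, not as the unique family derived in the advanced material.

```python
def jain_index(x):
    # Jain's index: (sum x)**2 / (n * sum x**2); equals 1 for equal allocation.
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

# Homogeneity: scaling the whole vector by a positive constant changes nothing.
assert abs(jain_index([1, 2, 3]) - jain_index([5, 10, 15])) < 1e-12

# Starvation: the equal split (1/2, 1/2) is no less fair than (0, 1).
assert jain_index([0.5, 0.5]) >= jain_index([0, 1])

print(jain_index([0.5, 0.5]), jain_index([0, 1]))  # 1.0 vs 0.5
```

Continuity can be eyeballed the same way by perturbing an entry slightly and watching the index move smoothly.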