Welcome to lecture 20 of Networks, Friends, Money and Bytes. This is the last lecture of the course. We're going to talk about something that we touched upon a few times in the course, but did not have the time to go through in more detail, and that is the subject of fairness. Now fairness is an important metric in socio-economic networks as well as technological networks. So now that we're in the very last lecture of the course, we're going to spend an hour or so to look at the quantification of this notion of fairness. And I just want to highlight that when we talk about fairness today, we're talking about fairness of resource allocation or performance allocation. We're not talking about fairness of liberty or rights, okay? The discussion around those issues would be quite different. All right, what is fair? Now this question has been studied so many times in so many different subjects. Let's start out with some basic thinking around this question. First of all: is equality, of performance or of resource allocation, the same as fairness? Of course, equality would be an interesting and intuitive way to quantify fairness. But this view that equality means fairness is very problematic. For example, suppose I have to choose between an allocation of 1 megabit per second and 1 megabit per second between two iPad users, versus one of 100 and 101 megabits per second between the two users. Now which one would you prefer? Which one would you think is more fair? Clearly, the first is the equal allocation, and the second is actually a slight deviation from equal allocation. If I plot this on a chart between the two users, one point would lie here, on the 45 degree line, and the other would be just a little above it. Even though the second one does not lie on the 45 degree line, almost all people would still prefer it, and would think that it is a better point, okay?
So clearly, when we say that a more fair point is actually a worse point, that doesn't give the word fair a very useful interpretation. So if we say the second allocation is preferable to the first, that means we somehow have taken into account some notion of efficiency. Okay, we'll come back to this later in today's lecture. And this also says that magnitude matters: it is the absolute allocation, not just the relative proportion, that makes the notion of fairness useful. And this underlies something called the difference principle in Rawls' theory of justice. In the advanced material part of the lecture, we will briefly go through this, one of the most influential lines of thinking in political philosophy in the 20th century on the subject of distributive justice, that is, fairness of liberty and also of resource distribution. Now of course a tougher choice would have been choosing between [1, 1] and [1, 2], let's say. Okay, but then again, magnitude matters. In fact, there's an interesting experiment that highlights why efficiency matters and why this would be a tougher choice than the earlier one, and that's called the ultimatum game. Basically, there are two players in this game, and one player gets to issue the following ultimatum. He says: here's a dollar, okay? It just dropped from the sky onto our laps. I can decide how to allocate this dollar between the two of us: I'll keep x and you then have 1 − x. Now this may remind you of the framework of negotiation based on bargaining theory by Rubinstein back in lecture five. But in this case, the choice is actually much sharper. If I propose to keep x so that you get 1 − x, then you have only two choices: (a) you accept it as is, or (b) you reject it, in which case the game is over. Okay, there are no further iterative rounds; you will get nothing, and I will get nothing. This experiment has been carried out in many different countries and demographics to see what x people would pick, and what x the other party would accept.
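The one-shot rule just described can be written down in a few lines. This is a minimal sketch of the ultimatum game's payoff rule only, not of the experiment itself; the function name and interface are my own, not something from the lecture:

```python
def ultimatum(x, accept):
    """Payoffs in the one-shot ultimatum game over a pie of size 1.

    The proposer keeps a share x and offers 1 - x 
    If the responder accepts, the split stands;
    if the responder rejects, both players get nothing and the game ends
    (there are no further rounds, unlike Rubinstein bargaining).
    Returns (proposer_payoff, responder_payoff).
    """
    if not 0 <= x <= 1:
        raise ValueError("proposer's share x must be between 0 and 1")
    return (x, 1 - x) if accept else (0.0, 0.0)
```

For example, `ultimatum(0.5, accept=True)` gives the even split `(0.5, 0.5)`, while rejecting any proposal gives `(0.0, 0.0)` to both players.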
And it turns out that, among the many interesting implications, one is that the magnitude matters: whether you are dividing $1 or $1 billion makes a significant difference, okay? All right, so this is the first line of thinking around this fairness notion. Is equality fairness? Probably not. If not, how do we understand the impact of magnitude? How do we understand the trade-off between everybody being much wealthier, except that some are wealthier than others, versus everybody starving to death? The latter would be very equal, as all the people starve to death together. Now, another way to think about this is to think about the different contributions, as well as the different needs, of different people in a network, whether it's a socio-economic network or a technological network. For example, if this ISP glues together the rest of the internet, okay, maybe it is a very important node with very important links, and its contribution is significant. We may or may not be able to quantify how much more it contributes, but by some centrality measure, for example, it might be ten times more important than the others. So maybe this ISP deserves a higher allocation of resources. And some people have more needs than others too, okay? So how do we think about that issue? Shall we give more to the more capable, contributing users? Shall we give more to the needier users? A good way to think about this is grading in a course. Okay, suppose in this course I'm going to assign the same grade, say a B, to all students. It doesn't matter who you are, what you did, what you did not do: a B grade for everyone. So rather than differentiating, say, two students get an A+ and the other 98 students get somewhere between an A and a D, including quite a bunch with a D and maybe even a few getting an F, I say: well, let's just be kind and generous, how about a B grade for everyone. First of all, this is problematic because it may not be providing the right incentives.
That is, incentives for the students to work hard. And second, this is hardly a fair allocation of grades, as most people would agree, even those who are getting a bump from a D to a B. They would think: hey, I just got an unfair B grade, because grades should be different depending on performance; I shouldn't ask the A+ students to subsidize my grade. And third, this scheme assumes that I can assign as many B grades as I want. But often, in a system, in a network, you have a total resource constraint; there are only so many B grades you can assign, given those constraints. All right, so while we appreciate that different people should receive different amounts of performance or resources, we also think that, suppose there are some lazier workers, less capable, underperforming workers. The fact that they are lazy and underperforming doesn't mean that they should just be left to starve to death, and their families with them. At the same time, very few people would think that these workers deserve the exact same pay, or even anything remotely similar to the pay of the others. So if I play basketball, I probably shouldn't be getting the same pay as Michael Jordan, even though some people may view Michael Jordan as getting a ridiculously high, unjustified pay. Others will say: well, he's that much better than the rest of us. So most people would agree that just because some don't perform as well as the others, they shouldn't be left to starve; at the same time, they shouldn't be getting the same as everyone else. Now the question is: how much should they get? Most people say, well, they should get at least the basics, for example, a very basic amount of food and shelter. But how much is a basic amount of food and shelter? Is an annual vacation to the Bahamas part of a basic package? That means we have to somehow quantify.
Because arguing in English usually leads to nothing, as we do not even agree on the scale of measurement. So how can we quantify the notion of fairness? There are quite a few different dimensions. (a) For example, procedural fairness. Too often, the word fairness has been used as a disguise to take away rights and liberty. And too often, the output, whether viewed as fair or not by somebody, comes from a procedure that is not fair, okay? In legal studies and in political science, people study procedural fairness a lot. Now we, unfortunately, will not have time to talk about that, even though it is, I would guess, far more important than whatever we'll be talking about. (b) Then there are studies in a lot of fields looking at different metrics to evaluate a given vector. I give you a vector of allocations, for example [1, 1] for two people, or [1, 2, 3] for three people, okay, and I ask you to give me some metric to measure it. This has been done in quite a few different ways, and this is the viewpoint we will be taking, okay? (c) And, as we've mentioned, there is also the notion of bargaining, and fairness in bargaining. (d) And there are viewpoints based on certain kinds of games, like the ultimatum game I just mentioned, the divide-a-dollar game, and extensions such as fair cake-cutting. We will mention some of these in the homework problems, but we won't be going through them in detail during the lecture. So today, we're going to focus on one particular angle: if you give me a vector, I would like to turn it into a scalar. Now there's no shortage of metrics to do exactly that. In fact, if you look at different existing notions, some would say this vector gives a scalar of 0.33, some say it gives a scalar of 0.86; another vector might turn into 0.01 or 0.41. These are not made-up numbers; they're actually what different metrics would map these two vectors into, okay. So even efficiency aside, one vector clearly gives more absolute value to everyone.
Even ignoring that fact, just look at the relative shares of the three users here. People would probably agree that one allocation is more fair than the other, if you ignore efficiency. But how much more fair? And therefore, is it worth the sacrifice in efficiency? That is clearly not understood: comparing with one metric, it's 33 times more fair; comparing with another, it's only twice as fair. And indeed, there are many such so-called diversity indices in statistics, in sociology, in computer science. For example, I can look at the ratio between the smallest and the largest entry. That is one case, okay? The ratio between the smallest and the largest is the spread. Or I can look at something a little more sophisticated, like the entropy function. It is used to measure the amount of uncertainty, but it can also be used to measure the amount of variation. Well, speaking of variation, how about the standard deviation, or the second-order moment of the vector, normalized? That is called Jain's index in the networking literature. A lot of these indices are normalized, so that you map whatever vector into a number in the range between 0 and 1, where 1 is the most fair. Now, we have seen another related, but subtly different, approach, where we provide an objective function to an optimization problem and then try to understand the resulting optimizer's properties. For example, the alpha-fair utility function, a utility parameterized by a parameter alpha between 1 and infinity. It says I'm going to look at x^(1 − α)/(1 − α) if alpha is not 1, and, in the limit, the logarithm function log x if alpha is 1. If I maximize a summation of these utility functions, I get an objective for the entire vector, and the resulting maximizer x* satisfies the so-called alpha-fair notion, including proportional fairness, which we talked about, when alpha is 1, and including max-min fairness.
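To make the metrics above concrete, here is a minimal sketch in Python. The function names are my own and the lecture does not prescribe any implementation; this simply evaluates each quantity on an allocation vector:

```python
import math

def min_max_ratio(x):
    """Spread: ratio of smallest to largest entry (1 means perfectly equal)."""
    return min(x) / max(x)

def entropy(x):
    """Entropy of the allocation, viewed as a distribution p_i = x_i / sum(x)."""
    total = sum(x)
    p = [xi / total for xi in x]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def jain_index(x):
    """Jain's index: (sum x)^2 / (n * sum x^2), in (0, 1], with 1 = equal.

    Scale-invariant: multiplying every entry by a constant does not change it.
    """
    n = len(x)
    return sum(x) ** 2 / (n * sum(xi ** 2 for xi in x))

def alpha_utility(xi, alpha):
    """Alpha-fair utility of one user's rate xi: log(xi) for alpha = 1,
    else xi^(1 - alpha) / (1 - alpha)."""
    if alpha == 1:
        return math.log(xi)
    return xi ** (1 - alpha) / (1 - alpha)

def total_alpha_utility(x, alpha):
    """The summation maximized in alpha-fair resource allocation."""
    return sum(alpha_utility(xi, alpha) for xi in x)
```

For instance, `jain_index([1, 1])` and `jain_index([100, 100])` both give exactly 1, illustrating the scale invariance mentioned below, while `jain_index([100, 101])` is just slightly below 1.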
Max-min fairness says you cannot make one user's rate or resource higher without making someone who is already getting a smaller amount get even less, okay? And that is called max-min fair, which happens as alpha becomes very large, going to infinity, okay? And these two are special cases. As you can see already, these two different approaches, a normalized diversity index and an alpha-fair objective function, are actually different, right? For example, the treatment of efficiency is different. For alpha-fair utility maximization, efficiency is implicitly embedded in the objective. But a scale-invariant normalized index is not affected by the magnitude: the [1, 1] vector scores the same as the [100, 100] vector. Can these two approaches be unified? Can more approaches be discovered? In fact, how many approaches are there? Well, let's try something that we saw back in lecture six, when we talked about the axiomatic construction of voting theory and of bargaining theory. Back then, we looked at the axiomatic treatment of Arrow's impossibility theorem, as well as the axiomatic treatment of bargaining by Nash in the advanced material part of the lecture. Okay, so in one case a set of axioms led to an impossibility result, and in the other case to a unique possibility result. So what kind of axioms are we talking about here? We will see them in the advanced material of this lecture. First, the axiom of continuity. Very briefly, it says that the function mapping a given vector of allocations into some scalar, the fairness value, should be a continuous function. Then the axiom of so-called homogeneity, which says that if I scale the vector x by, say, a factor of 5, the fairness of 5x should be the same as the fairness of x. In fact it doesn't matter whether it is 5 or any other positive constant; scale does not matter. A function with that property is called a homogeneous function, okay. So efficiency is automatically taken out of consideration for the moment.
Then there is the axiom of saturation. This is a technical axiom, which we will skip until the advanced material. Then the axiom of partition, which says you can grow the population size and the notion of fairness still remains well defined. And finally the axiom of starvation, which says that the allocation [1/2, 1/2] between two users, the equal allocation, should be no less fair than the allocation [0, 1], which starves one of the two users. Notice this is not saying that equal allocation should, by axiom, be viewed as the most fair. It simply says it is not less fair than starvation, okay. So it's a very weak statement, and therefore a very easy axiom to accept. It turns out that if you believe these five axioms, then, skipping the derivation, we will be able to derive a unique family of functions F that map a vector of allocations to a scalar representing the fairness value. And this scalar will allow us to do two things. First, quantitative comparison between two vectors, okay: which is more fair. And second, a scale: it not only gives an ordered comparison, but also provides a numerical scale to talk about how much fairer one allocation is with respect to the other. So it is our job now to go through that unique family of fairness functions, constructed from just these axioms.
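The lecture defers the actual derivation to the advanced material, so purely as a hedged preview: in the published axiomatic-fairness literature that this construction follows, the resulting family is a one-parameter family indexed by a real parameter (called beta below; the notation and function name are mine, and the exact form should be checked against the advanced material). The sketch below shows how such a family behaves; the choice beta = −1 reproduces Jain's index up to a factor of n:

```python
import math

def fairness(x, beta):
    """One-parameter family of fairness functions consistent with the
    five axioms, as given in the axiomatic-fairness literature
    (form assumed here; see the advanced material for the derivation).

    F_beta(x) = sign(1 - beta) * ( sum_i (x_i / sum_j x_j)^(1 - beta) )^(1 / beta)

    Larger value means more fair. beta = 0 and beta = 1 are degenerate in
    this expression and arise as limits (entropy-like and proportional-fair
    cases); avoid passing them directly.
    """
    total = sum(x)
    p = [xi / total for xi in x]          # homogeneity: scale drops out here
    s = sum(pi ** (1 - beta) for pi in p)
    return math.copysign(s ** (1 / beta), 1 - beta)
```

Two sanity checks against the axioms: by homogeneity, `fairness([1, 1], -1)` equals `fairness([100, 100], -1)`, and by the starvation axiom, `fairness([0.5, 0.5], -1)` is no less than `fairness([0.0, 1.0], -1)`.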