0:25

At the application layer, or rate at the physical layer.

Later, in lecture thirteen, we'll be talking about different layers and the

different metrics of quality on different layers.

It could also be delay. It could be jitter, which is the variation in

delay. It could be the energy you spend.

It could be the distortion of your video. It could be any of these.

Although, in most of our lectures, we'll be looking at rate or throughput as the

argument. In general, this is some function of that argument, whether

it's an increasing or concave function or not.

We'll say a few words about that in a minute.

And then you're going to sum up the utilities across all the users.

This is the sum utility or the so-called network utility, okay?

This is network utility maximization or, in short, NUM.

We'll come back to this a few more times later in this course.

That's the objective function. What about the constraints?

There can be all kinds of constraints, technological constraints.

For example, a router cannot support both a large number of degrees and a large

amount of bandwidth per degree. You know, there could be economic

constraints. For example, I am not willing to pay a

certain cost in order to provide that much bandwidth supply.

And by the way, when we talk about the word bandwidth, okay, we often actually

mean capacity in bits per second, rather than the width of the frequency band

being used. Okay?

Even though a wider band, other factors remaining constant, implies a bigger

capacity, these are two physically, conceptually

different terms. But it so happens that a lot of people

use the word bandwidth. I guess it's a cooler name than the word

capacity. So sometimes we stick to that incorrect

terminology because it's used almost universally.

Now what about the variables? Well, that depends on what the

arguments in the objective function are, okay? If these are the arguments, maybe our

variables could be the rates or powers themselves, or it could be some other

design factor that would impact these different metrics.

So we maximize the summation of utility functions of some arguments, subject to

some kind of constraint that says the variables, denoted by some vectors x and y,

live in certain constraint spaces X and Y, and so on.
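Written out, the generic problem just described takes the following form (the symbols x, y and their constraint sets are the generic ones from the lecture):

```latex
\begin{aligned}
\text{maximize} \quad & \textstyle\sum_i U_i(x_i) \\
\text{subject to} \quad & \mathbf{x} \in \mathcal{X},\quad \mathbf{y} \in \mathcal{Y} \\
\text{variables} \quad & \mathbf{x},\ \mathbf{y}
\end{aligned}
```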

3:05

Now, the goal is to formulate, and try to solve, hopefully distributively, many of these

problems. So, let's look at this utility function,

each term. Okay?

Let's say the argument is xi. It could be, say, the data rate for a

particular user i. And the shape of the function is denoted

by, okay, this function u sub i. So what kind of function could this be?

You see three functions, three curves, here in this plot.

U(x) versus x is a single-variable function. This function A is, let's say, a

logarithmic function. We'll see that there is a special fairness

notion associated with log functions in a minute.

So, it is quite nice. It's smooth, increasing, and concave.

But you see, this curve B here is actually not smooth, because there is a jump at this

point. Below this point, you get no utility.

Above this point, you get a strictly positive, constant utility,

okay? So there is a jump point.

Now, curve C is interesting. It's got a convex part, and then it

becomes concave, to the point of bending completely flat.

Meaning that giving me more of the resource doesn't increase my utility any more.

So it goes through a convex part,

then through a concave part, part of which is completely

flat. So not only can the arguments of these

utility functions be of various kinds, the shapes can also be quite different,

okay. So, in particular: is it smooth? We often

deal with smooth functions. In fact, in all the examples you will see,

it will be smooth, but it doesn't have to be. Is it increasing?

Pretty much always increasing, because we're talking about something good; it's a

utility function. If you are talking about something that

measures how bad it is against some argument, maybe the answer is different.

And third, is it concave or not? Now, increasing or not is talking about

the first derivative of the utility function.

We say it is non-negative. What about concavity or convexity?

And this is talking about the second derivative, okay.

So, concave means that the second derivative is smaller than or

equal to zero, whereas convex means, as you may recall from lecture four,

that the second derivative is bigger than or equal to zero.
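As a quick numerical sanity check of these sign conditions (a sketch; the step size and test points here are arbitrary), one can estimate the second derivative by finite differences:

```python
import math

def second_difference(f, x, h=1e-3):
    """Finite-difference estimate of the second derivative f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# log is concave on x > 0: its second derivative is -1/x^2 <= 0
for x in [0.5, 1.0, 4.0]:
    assert second_difference(math.log, x) < 0

# exp is convex: its second derivative is e^x >= 0
for x in [-1.0, 0.0, 2.0]:
    assert second_difference(math.exp, x) > 0

print("log: concave, exp: convex")
```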

Now, of course, this is talking about a single-variable function.

For a multivariable function, the first derivative is a gradient vector,

and the second derivative is not a scalar anymore, but the Hessian matrix.

Again, back in lecture number four. So,

6:27

Even if you get happier and happier, the rate with which you become happier will

be going down. So, for example, this kind of function is

still going up, up, up; it never drops. But how fast you go up, as a function of a

unit increase in x, slows down. Okay?

In other words, the second derivative is negative,

okay? This is an increasing function, but the

rate of increase is itself decreasing. So,

this is usually the case: a concave function.

But it doesn't have to be concave at the beginning.

That is, eventually it will bend over, but before that it can actually go

convex and then concave, with an inflection point.

Like what we observed in some of lecture seven's and lecture eight's curves in diffusion

models. Okay?

And in some of the other influence models in crowds.

So you can also have a sigmoidal function with both a convex and a concave part.

In general, if it's smooth, increasing, and concave, then it's a much nicer function

to deal with mathematically. Where do these utility functions come

from? Usually there are three ways. One is you

can run an experiment with real human beings in a focus group study.

Put them on a sofa, give them pizza, and then say, well, please watch these videos or

listen to these voice calls. And then you rank them, you rate them, one

to five stars. And then you vary the different kinds of

voice calls and media you present to them.

So this often leads to what's called the MOS, the Mean Opinion Score.

Okay? You run a psychological experiment.

The second reason why certain utility functions are

picked is because of what they represent in a demand function.

8:30

Okay? Utility function and demand function, in

some sense, are two sides of the same coin,

because we're going to model individual decision making as net utility

maximization. Very similar to what we saw in lecture two

with the auction model as a game, where we say that each individual here

says, I will get this much utility, okay? I'm user i.

u_i of x_i. Good, but this is not a free lunch; there is no free lunch. I have to

pay something, for example p_i per unit of whatever I'm getting, see?

Bandwidth, or rate, is in bits per second, so for each bit per second I pay a unit

price p_i, and I'm getting this much, x_i, in volume. So the total cost for me is p_i

times x_i. This is how much I'm getting, this is how much I'm paying, and the

difference is the net utility.

over my individual choice, x i. Okay, so in the market, you'd give me the

price. I react to the price by picking xi.

Well, if this u is a nice function, smooth, increasing, and concave, then this

is a nice concave maximization in x_i: a concave part minus a linear part.

So we can just take the derivative: u prime of x, suppressing the subscript i for

notational simplicity, okay, minus p. That's the derivative, and we let it be zero.

That means u prime of x = p. If a particular x satisfies this

equation, call that x star. Then that is the induced demand,

induced by this price p given to me. And it turns out that the utility's first

derivative is an invertible and decreasing function, because of our model

of the utility function. So, x star is u prime inverse of p, a

decreasing function of the price p. So we call this u prime inverse the demand

function d. So d is u prime inverse.
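That is, symbolically, the demand function is the inverse of marginal utility:

```latex
D(p) \;=\; (U')^{-1}(p),
```

which is decreasing in p because U is concave, so U' is decreasing.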

Now, give me a utility function; take the derivative, invert it,

and you've got the demand function. Give me a demand function;

I can also reverse engineer the utility function.

Going this way, from utility to demand function, you can do this as a simple

exercise when the alpha-fair utility function comes up.

being log of x. Okay?

They are the reason and so demand function is just one over p,

Okay? Easily convinced south about that.
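To convince yourself numerically (a sketch: the grid search below stands in for the calculus, and the grid bounds and step are arbitrary), maximizing the net utility log(x) - p*x over x should land at x* = 1/p:

```python
import math

def demand_by_grid(p, step=1e-4, x_max=10.0):
    """Maximize net utility log(x) - p*x by brute-force grid search."""
    best_x, best_net = None, float("-inf")
    n_steps = int(x_max / step)
    for k in range(1, n_steps + 1):
        x = k * step
        net = math.log(x) - p * x
        if net > best_net:
            best_x, best_net = x, net
    return best_x

# the calculus answer is u'(x) = 1/x = p, i.e. demand x* = 1/p
for p in [0.5, 1.0, 2.0]:
    assert abs(demand_by_grid(p) - 1.0 / p) < 1e-2
print("grid search agrees with x* = 1/p")
```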

And then, in lecture fourteen, we will talk about the other direction: give me a

demand function, and I try to reverse engineer

the underlying utility function. And the key concept in modeling demand is

the so-called demand elasticity. Okay.

13:51

Well, see, suppose you tell me there is a vector x, let's just say x,

okay? This vector allocates some good, some resources,

x1, x2, up to xn, among the n users competing against each other.

And I say that this particular vector is an alpha-fair

allocation vector if, and here's the definition, okay.

For any other feasible vector y,

I look at the difference, for each user, between getting something from the y

allocation versus from the x allocation,

and then I normalize it by xi to the power alpha. That's where the alpha comes in.

That's some kind of alpha-parameterized normalization of the difference in

drifting away from the x allocation to the y allocation.

And then I look at the joint impact, the collective impact on all the users in the

system, summing over i. If this summation, which is a single

scalar, is negative,

okay? Less than or equal to zero, then I say this

vector x is an alpha-fair vector.
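In symbols, the definition just stated reads: x is alpha-fair if, for every feasible allocation y,

```latex
\sum_{i} \frac{y_i - x_i}{x_i^{\alpha}} \;\le\; 0 .
```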

Now, that's a mouthful of a definition. Let's process it again.

We say a vector x is alpha-fair if, for any other feasible allocation y (you can't

give me an infeasible allocation and ask me to compare, okay),

the collective, alpha-normalized drift from x to any such vector y

is always a bad thing.

Okay, the sum is negative; then I say this vector indeed is an alpha-fair

allocation. All right, now you can disagree with the

definition, but it turns out to be a useful definition and therefore a good

definition. So, what about our utility function?

This is a vector, not a function. Well, it turns out, and we'll just

state the result without proving it, that you should maximize the following

parameterized utility function, parameterized by alpha, so I'm writing

alpha in the subscript for this utility function as a function of x.

If you maximize this function, what does it look like?

It is a branching definition. It is simply x to the power

one minus alpha, over one minus alpha, if alpha is not one, okay.

If alpha is one, this denominator is not defined, but you can use L'Hôpital's rule

and see that when alpha is one, this actually becomes log of x,

okay? Now we see the familiar and important special case of log utility.

So, if this is the definition of your utility function,

then maximizing such a utility function will lead to an answer, the optimizer x

star, that satisfies the definition of an alpha-fair allocation.

That's why we call this the alpha-fair utility function.
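A minimal sketch of this branching definition in code (the function name is ours; note the log limit below is checked for the constant-shifted form (x^(1-a) - 1)/(1-a), which has the same maximizers since the shift does not depend on x):

```python
import math

def alpha_fair_utility(x, alpha):
    """U_alpha(x) = x^(1-alpha)/(1-alpha) for alpha != 1, log(x) at alpha = 1."""
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

# alpha = 0 reduces to the rate x itself
assert alpha_fair_utility(3.0, 0) == 3.0

# near alpha = 1, the shifted form approaches log(x)
a, x = 1.0001, 5.0
shifted = (x ** (1 - a) - 1) / (1 - a)
assert abs(shifted - math.log(x)) < 1e-3
print("alpha-fair utility checks pass")
```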

Another name for it is the isoelastic utility function.

Okay. Why isoelastic? It gets back to our

notion in the last slide, okay?

In a homework problem, I think, you can easily verify that if you take this definition of

u of x, you will see the resulting demand function,

okay? And that implies the elasticity:

it is actually one over alpha. In particular, when alpha is one, for log

utility, the corresponding demand function is just one over p, and the corresponding

normalized demand elasticity, one over alpha,

is just one in that case, since alpha is one.

Okay. But in general, for other, non-one alphas,

the elasticity eta is one over alpha. That means the elasticity of demand is

independent of everything else, okay, except the shape

parameter alpha. That's why it's called isoelastic.
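One way to see this (a short derivation using the net-utility maximization from earlier, with elasticity defined as the normalized slope of the demand curve):

```latex
U_\alpha'(x) = x^{-\alpha} = p
\;\Longrightarrow\;
D(p) = p^{-1/\alpha},
\qquad
\eta \;=\; -\frac{d\log D(p)}{d\log p} \;=\; \frac{1}{\alpha}.
```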

Later, we'll see many good uses of the isoelastic, or alpha-fair, utility function,

including the following special cases. When alpha is zero, what do we have?

That's simple: we're just looking at the allocation

itself. It's just the sum of the allocation, for example,

sum-rate maximization. When alpha is one, that's the log utility

function. It turns out that it leads to what's called proportional fairness.

If you look at this definition here, when alpha is one it means that your

normalization is simply xi itself; you're not trying to skew the shape by any means.

And, whether you like the definition or not, you can see why this is called

proportional fair, because the drift is normalized, proportionally, by x.

So, when alpha is one, you have this special case called proportional fairness.

Now, when alpha is really big, for example, as alpha approaches infinity, it turns out

that the allocation approaches what's called max-min fairness.
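To make these special cases concrete, here is a toy sketch (the two-user constraint x1 + 2*x2 <= 1, where user 2 is twice as expensive to serve, is invented for illustration): a grid search over the feasible boundary for several values of alpha.

```python
import math

def alpha_utility(x, alpha):
    # x^(1-alpha)/(1-alpha) for alpha != 1, log(x) at alpha = 1
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

def best_allocation(alpha, steps=20000):
    """Grid-search max of U_a(x1) + U_a(x2) subject to x1 + 2*x2 = 1."""
    best, best_val = None, float("-inf")
    for k in range(1, steps):
        x2 = 0.5 * k / steps        # x2 ranges over (0, 0.5)
        x1 = 1.0 - 2.0 * x2         # spend the whole budget
        if x1 <= 0:
            continue
        val = alpha_utility(x1, alpha) + alpha_utility(x2, alpha)
        if val > best_val:
            best, best_val = (x1, x2), val
    return best

s1, s2 = best_allocation(0)   # sum-rate: cheap user 1 takes almost everything
p1, p2 = best_allocation(1)   # proportional fair: calculus gives (1/2, 1/4)
m1, m2 = best_allocation(8)   # large alpha: close to max-min, near (1/3, 1/3)
print((s1, s2), (p1, p2), (m1, m2))
```

Sweeping alpha upward visibly shifts the allocation from pure efficiency toward equality, which is the whole point of the alpha parameterization.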

I will later talk more about fairness, so let's leave it just like that for the

moment, and conclude this part of the video module

with a simple application of this utility model,

okay? Whether it comes from a focus group study or

fairness or demand elasticity, we apply this simple utility model to talk

about a very important phenomenon called the Tragedy of the Commons.

Now, this was first formally mentioned by Lloyd a long while back, in 1833,

and then codified and popularized by Hardin's article in 1968.

Now, this article had other kinds of provocative and somewhat controversial

arguments. The Tragedy of the Commons is only one small part

of it, but we have seen a liberal use of this term, Tragedy of the Commons, quite a few

times. In lecture one, when we talked about power

control and interference, that's the tragedy of the commons in the air.

When we were in lecture two, okay, the tragedy of the commons in terms of the second-price

auction correctly internalizing the externality imposed on other auction

competitors. In lecture, I think, six, okay, voting

theory, okay: the degree of externality imposed on

other voters and the need for something like a vote count.

And later, in lecture, I think, fourteen for TCP, fifteen for P2P, and

eighteen for WiFi. Okay?

In all of these cases, in our networking discussions, we will see a different

manifestation of the Tragedy of the Commons. But the reason it was called the Tragedy of the

Commons is because it was a thought experiment, okay, just like the binary

number experiment in lecture seven. It says: suppose you've got a

piece of common grassland as the commons, okay, and there are different farmers, and

each farmer says, okay, should I get another cow or not?

Okay, here's a cow. Well, it looks like a mouse, but let's

say it's a cow, right. So, should I get a cow now?

Well, if I get a cow, the cow's going to need to eat grass, okay.

But, you know, the cost of eating the grass, if there are n farmers, is

like one over n; it is proportional to that. But the benefit to me, of the milk and

maybe selling the cow for its meat later down the road, is one unit, okay.
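The incentive arithmetic just described can be sketched as follows (the numbers are the stylized ones from the lecture: benefit 1 per cow to its owner, with the grazing cost shared as roughly 1/n each):

```python
def owner_net_benefit(n):
    """Net benefit to one farmer of adding a cow: benefit 1 to the owner,
    but the grazing cost (1 in total) is shared by all n farmers."""
    return 1.0 - 1.0 / n

# every individual farmer, for any n >= 2, wants another cow...
for n in [2, 10, 1000]:
    assert owner_net_benefit(n) > 0

# ...even though collectively each cow's full cost (1) cancels its
# benefit (1), and once the commons is overgrazed the cost exceeds it:
# individually rational choices ruin the shared resource.
print([round(owner_net_benefit(n), 3) for n in [2, 10, 1000]])
```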