0:01

A general theme that will recur in almost every single lecture in this course

is that we will see individual behaviors, often driven by self-interest, aggregate

into a state across all the users, a global configuration, in a social, economic,

or technological network. And sometimes, it can aggregate into a

fair and efficient configuration. We will try to quantify efficiency and fairness

in this course. And such a phenomenon, of local behavior

aggregating into a global configuration that is desirable.

It's going to be helped by some kind of feedback signal, which could be, say, a

pricing signal, or maybe a congestion signal, or maybe a collision signal.

Some of these feedback signals are explicit: some network element tells you a specific

action that you need to take. Sometimes the feedback is implicit:

you measure something, for example the current SIR at a time t, and then use that

implicit feedback to adjust your own individual behavior in the next iteration.

1:22

Now we're going to understand this specific behavior in distributed power control

from two angles. And in that sense we're using DPC as a way

also to introduce the terminology and the basic notions in two powerful

mathematical modeling languages. One is the language for constrained

decision making, called optimization theory.

And the other is the language for strategic thinking by intelligent agents,

called game theory. Now let's start with optimization theory.

This is a word that we use in our daily language.

All right. We try to optimize something.

We try to optimize our time, our holiday schedule, our work schedule.

And if you think about it, in that optimization you have some degrees

of freedom, which we will mathematically call optimization variables.

You have some objectives you wanna maximize or minimize.

For example, maximize your employability. Okay.

Your happiness. Minimize the cost it takes to finish some

task. And then you have some constraints.

Without constraints, the problem would be too good to be true.

These could be constraints on the time you have, the money you have, the energy you

have. For example, when you look at your schedule

this weekend, you may say, well, I can pick my variable

to be: spend the time taking a course, like this one,

or spend the time watching a movie. An objective function may be: make

me as happy as possible. And the constraint, say, is that you only have

24 hours in each day. Now we will see a mathematically precise,

unambiguous language called optimization theory.

And in each optimization problem, there are four main data fields.

One is, of course, the objective. Now, in the case of transmit power control

in cellular networks, the objective is to minimize the sum of the individual

transmit powers. So we will write: minimize the sum of the

p_i's over i. So this is a power-minimal configuration,

subject to the constraints that you have to achieve the target SIRs for all the

users: the SIR for each user, indexed by i, must

be no smaller than the target gamma_i, and you need this to hold not

just for a single i but for all i. And then you get to vary the degrees of

freedom, in this case obviously the set of transmit powers.

And everything else is a constant, including the channel conditions G_ij,

the noise n_i for each receiver, and the target SIRs gamma_i.

Toward the end of this lecture we will also see what happens if these target SIR

values become variables as well. But for now, they are held constant.
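Collecting the four data fields just listed (objective, constraints, variables, constants) into one display, with the SIR written out in its standard form, the problem is:

```latex
\begin{aligned}
\text{minimize}   \quad & \textstyle\sum_i p_i \\
\text{subject to} \quad & \mathrm{SIR}_i = \frac{G_{ii}\,p_i}{\sum_{j \neq i} G_{ij}\,p_j + n_i} \;\geq\; \gamma_i \quad \text{for all } i \\
\text{variables}  \quad & \{p_i\} \\
\text{constants}  \quad & \{G_{ij}\},\ \{n_i\},\ \{\gamma_i\}
\end{aligned}
```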

Now once you have an objective function, a set of constraints,

and you know what is a variable and what is a constant, then you have an optimization

problem. Pictorially, what we are looking at in

this case of transmit power control is visualizable in the following cartoon.

Now suppose we have just two users, so we can draw in the two-dimensional plane, on a piece

of paper or a slide. And the first user's SIR is on the x axis,

the second on the y axis. And we have a region, okay?

So the shaded region here denotes the set of feasible SIR values.

Now ideally you want SIR for both users to be high.

But as you know, because of interference, that is not possible.

So let's just look at those SIR1, SIR2 values that are possible.

And we call these the feasible SIR values. So any point inside this region, okay, is

a feasible point. But there are also inferior points, because

I could find a way to increase the SIR for both users.

Okay. So any point in this region will be

strictly better than this point. At the same time, every point outside the

boundary is infeasible. What about the points exactly on the

boundaries? We call those points Pareto optimal.

Now clearly there are an infinite number of these points on the boundary, and they are

all Pareto optimal. In some sense, they are not directly

comparable. Points inside are feasible, but inferior.

Points outside are infeasible, so don't even worry about those.

Our job is to find a point that is at least on the Pareto optimal boundary.

That means you cannot increase one dimension without hurting the other

dimension. Those points form the boundary

of this region. Now which point to pick, then?

That would depend on the objective function.

You have to impose some other objective function in order to pick exactly the

point that you like most. Now later we will see this theme of trade

off. Okay, tradeoff in this case between the two

users' SIRs. And in general, the tradeoff between any

competing users in a social, economic, or technological network.

And we will always want to operate on the Pareto optimal boundary,

or at least pick a point that is Pareto optimal.

You cannot make one user happier without making another user less happy.
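The feasibility and dominance ideas above can be sketched numerically. Here is a small Python sketch (my own illustration, not from the lecture; the gains G and noise n are made-up numbers) that samples power vectors for two users, computes the resulting SIR pairs, and keeps the sampled points that no other sample dominates, an approximation of the Pareto optimal boundary.

```python
# Sketch: tracing the feasible SIR region for two users by sampling
# power vectors. G and n below are made-up illustrative numbers.

def sirs(p, G, n):
    """SIR_i = G[i][i]*p[i] / (sum over j != i of G[i][j]*p[j] + n[i])."""
    out = []
    for i in range(len(p)):
        interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
        out.append(G[i][i] * p[i] / (interference + n[i]))
    return out

def dominates(a, b):
    """True if SIR pair a is at least as good as b in both coordinates
    and strictly better in at least one, i.e. b is inferior to a."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

G = [[1.0, 0.1], [0.2, 1.0]]   # G[i][j]: gain from transmitter j to receiver i
n = [0.1, 0.1]                 # receiver noise

# Sample SIR pairs over a grid of power vectors (powers capped at 1).
points = [sirs([p1 / 20, p2 / 20], G, n)
          for p1 in range(1, 21) for p2 in range(1, 21)]

# A sampled point approximates a Pareto optimal point if no other sample dominates it.
pareto = [a for a in points if not any(dominates(b, a) for b in points)]
```

Plotted, the kept points trace the upper-right boundary of the shaded feasible region in the cartoon; every discarded interior point is inferior to some other sample.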

9:44

p_i G_ii - gamma_i ( Σ_{j ≠ i} p_j G_ij + n_i ) ≥ 0, for all i.

What kind of function is this, in our variables?

Remember G, n, and gamma are constants, and our variables are the p's. Well, this is just some

number multiplying p_i, this is just some number multiplying p_j, and this term doesn't even

depend on the p's. So this is actually an affine function,

or what we loosely call a linear function. So we are talking about minimizing a linear

function, p_1 + p_2 + ... + p_n, subject to constraints which are

also linear functions in our variable p. When we minimize a linear function subject

to linear constraints, we call that linear programming.

Now, this "programming" has nothing to do with computer programming code.

It refers to a mathematical program. So linear programming, a terminology

that dates back to the 1940s, refers to minimizing linear functions subject to linear

constraints in your variables. Of course, if gamma is also a variable, then

this is not a linear function in both gamma and p. But that's not our

worry today. Our problem today has fixed gammas; we just

vary the p's. And this is an LP.

Now LPs are easy to solve, in theory and in practice.

But, as we will see in lecture four, when we talk about the Netflix Prize and

recommender systems, the watershed between easy and hard

optimization problems is actually not linearity but so-called convexity.

We will come to that in a later lecture. All right, so now we have defined what the

optimization problem is and we have recognized that it is a linear programming

problem. How can we solve it?

In fact, if you only need to solve this on a centralized computer, then there are

many ways to solve it in a computationally efficient way. But if you want to solve it

in the context of a cellular network, with distributed action and no explicit

message passing between the base stations and the mobile stations, then it's not

easy. In the advanced material part of the

lecture, or in section 1.4, the advanced material section of chapter one in the

textbook, we will see the proof of convergence of the DPC algorithm to an

optimizer of this problem. In other words, the DPC algorithm is

actually a distributed solution method to solve this special linear programming

problem, in the physical context of a wireless cellular network.
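As a sanity check on that claim, here is a small sketch (mine, not the lecture's; illustrative numbers), assuming the usual DPC update in which each user i multiplies its power by gamma_i / SIR_i(t), using only its own measured SIR. The iterates approach the same power-minimal point that the LP identifies.

```python
# DPC sketch: each user scales its power by gamma_i / SIR_i(t), using only
# its own measured SIR. G, n, gamma are made-up illustrative numbers.

def sir(i, p, G, n):
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (interference + n[i])

G = [[1.0, 0.1], [0.2, 1.0]]   # G[i][j]: gain from transmitter j to receiver i
n = [0.1, 0.1]                 # receiver noise
gamma = [1.0, 1.0]             # target SIRs

p = [1.0, 1.0]                 # arbitrary starting powers
for _ in range(100):
    # synchronous update: all SIRs measured before any power changes
    p = [gamma[i] / sir(i, p, G, n) * p[i] for i in range(len(p))]

print(p)  # approaches the power-minimal point where each SIR_i equals gamma_i
```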

14:47

feasible solutions. Okay.

Now, what I mean by "no worse than" depends on whether you're talking about

minimization or maximization. For a minimization problem, it means that

it is at least as small as what any other feasible solution will give you in the

objective function. So, if you pick any other feasible x and

stick it into the objective function, which is denoted as f in general,

then f evaluated at this x star vector will be less than or equal to that,

if you're minimizing the objective function.

Of course, if you're trying to maximize a function, so the bigger the better, then

we say that f evaluated at x star

will give you a value that is at

least as big as at any feasible x vector. Such an x star we then call an optimal solution.

Okay. As you can see, sometimes there is no

optimal solution. Sometimes there is one, and sometimes there

are many, in fact an infinite number of optimal solutions.

Later we'll also try to make a differentiation between globally optimal

versus locally optimal. The definition we just talked about,

comparison with any other feasible solution, is global optimality.

If you say that this is no worse than any other feasible solution in a small

neighborhood around this point, then we call that solution a locally optimal

solution. So these are the terminologies, a

feasible solution, a globally optimal and a locally optimal

solution, that we will be using again and again in the future lectures.
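As a toy illustration of that distinction (my example, not the lecture's): a one-variable function with two valleys. A search restricted to a neighborhood of the right valley returns a locally optimal point, while a search over the whole feasible set finds the globally optimal one.

```python
# Toy example: f has two local minima, near x = -1 (global) and x = +1 (local only).
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

xs = [i / 1000.0 for i in range(-2000, 2001)]   # grid over the feasible set [-2, 2]

x_global = min(xs, key=f)                        # best over the whole grid

right = [x for x in xs if x > 0.5]               # a neighborhood of the right valley
x_local = min(right, key=f)                      # best only within that neighborhood

print(x_global, x_local)  # f(x_local) exceeds f(x_global): local but not global
```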