0:26

So now we're going to walk through a numerical example.

First we will define the length of the time slots.

Suppose we are using 802.11g WiFi, which transmits at 54 megabits per second.

And here are the relevant timing parameters. Everything is in microseconds. A single time slot is nine microseconds. Then the SIFS is ten microseconds, and the DIFS is the SIFS plus two time slot units, which is 28 microseconds.

Okay, so now we can look at TB, that is, the time slot length for a backoff slot with an idle channel. Okay, that is the length of a DIFS, so that is just 28 microseconds. What about the time slot for a successful transmission? That means we have the data frame itself, plus waiting a SIFS, plus the acknowledgement packet, plus waiting a DIFS. This totality is TS. So let's look at the time it takes to send

this data. Now, the data frame consists of a header and then the payload.

Okay. First of all, there is a 16 microsecond physical layer preamble. You simply spend sixteen microseconds as a guard band at the start. And then there is a 40 bit physical layer header, with information about the physical layer configuration. The first 24 bits are sent at a much lower speed: instead of 54, they are sent at six megabits per second, so that they have a much higher probability of being decoded at the receiver end. So that is 24 bits over six megabits per second, okay? Since this is in bits and this is in megabits per second, the factors cancel each other out, and we can just write 24 over six microseconds. Then there are the sixteen remaining header bits,

plus the 240 bit MAC layer, that is, link layer, header, okay? Plus 32 bits of error check coding in the link layer. All of these are sent at the rate of 54 megabits per second. Plus, of course, the actual payload.

This part is L bits, okay, and we are going to later assume that L is 8,192 bits, a typical value for the number of bits in the payload, okay? Now, the payload is also sent at 54 megabits per

second. So this all boils down to 25.33 plus L over 54 microseconds for the data frame. That, plus the SIFS, the acknowledgement, and the DIFS, gives the length of TS.

So finally, what about TC, the collision slot? [COUGH] Well, there are actually a couple of different variations on how people define TC, but I think the proper way is to define it as essentially the same as TS, because you have to wait until the DIFS is over before you can guarantee that the acknowledgement is indeed not being sent back to you. So we have defined these parameters.
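As a sanity check on the arithmetic above, here is a small Python sketch of these timing quantities. The slot, SIFS, DIFS, and frame breakdown come from the lecture; the acknowledgement airtime is not spelled out here, so it is left as a placeholder argument rather than a real number.

```python
# Timing quantities from the lecture, all in microseconds (802.11g, 54 Mb/s).
SLOT = 9                    # one backoff time slot
SIFS = 10
DIFS = SIFS + 2 * SLOT      # 10 + 18 = 28 us

T_B = DIFS                  # slot length for a backoff slot with idle channel

def t_data(payload_bits):
    """Airtime of one data frame, following the lecture's breakdown."""
    preamble = 16                    # PHY preamble: fixed guard time
    slow_header = 24 / 6             # first 24 PHY header bits at 6 Mb/s
    # remaining 16 PHY header bits + 240-bit MAC header + 32-bit check
    # + payload, all sent at 54 Mb/s
    fast_part = (16 + 240 + 32 + payload_bits) / 54
    return preamble + slow_header + fast_part    # = 25.33 + payload/54

def t_success(payload_bits, t_ack):
    # t_ack is a placeholder: the lecture does not give the ACK airtime
    return t_data(payload_bits) + SIFS + t_ack + DIFS

print(T_B)                        # 28
print(round(t_data(8192), 2))     # 177.04
```

With L = 8,192 bits the data frame alone takes about 177 microseconds, of which 25.33 microseconds is pure header and preamble overhead.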

Okay? Now let's also define some other parameters. Let's say the maximum number of backoff stages you can have in exponential backoff is three. So you can multiply the window by two, by two, by two, and then you stop and declare the frame lost. The minimum window of time you need to wait is, let's say, two to the four minus one. Okay? In our analysis we ignored this minus one factor, but here we can just incorporate it, and that is fifteen.
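These backoff parameters can be illustrated with a short sketch. The doubling-and-cap convention used here, where the window goes 15, 31, 63, 127 and then stays at 127, is my reading of the lecture's numbers, not something it states explicitly.

```python
import random

W_MIN = 2 ** 4 - 1    # minimum contention window from the lecture: 15
B = 3                 # maximum number of backoff stages (doublings)

def window(stage):
    """Contention window after `stage` consecutive collisions, capped at B."""
    return (W_MIN + 1) * 2 ** min(stage, B) - 1

def draw_backoff(stage):
    # number of idle slots to wait: drawn uniformly within the current window
    return random.randint(0, window(stage))

print([window(s) for s in range(5)])    # [15, 31, 63, 127, 127]
```

After the third doubling the station gives up and declares the frame lost, which is what bounds the delay of a single transmission attempt.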

4:54

We can plug into the formulas. First, we can plug into the formula for tau, and it turns out we can numerically solve for tau to be 0.0765. That is basically 7.65% as the contention probability, the probability that you will transmit as a single station. Okay.

And then this leads to the Pt and Ps calculation, which leads to the S calculation, together with all those constants. Okay. The exact blown out form of the formula is in the textbook, or you can just verify it through your own calculation, okay?
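One way to do that verification can be sketched numerically. The fixed point below is the standard Bianchi-style coupling between tau and the conditional collision probability p; the mapping W = 16, B = 3 is my assumed reading of the lecture's parameters, so the value returned need not match 0.0765 exactly if the textbook's formula differs in detail.

```python
def solve_tau(n, W=16, B=3):
    """Per-station transmit probability tau for n saturated stations,
    found by bisection on a Bianchi-style fixed point."""
    def residual(tau):
        # p: probability a transmission collides, given the other n-1 stations
        p = 1 - (1 - tau) ** (n - 1)
        # tau as a function of p (standard Bianchi expression)
        f = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                               + p * W * (1 - (2 * p) ** B))
        return tau - f
    lo, hi = 1e-9, 0.999    # residual is negative at lo and positive at hi
    for _ in range(200):
        mid = (lo + hi) / 2
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(solve_tau(10), 4))
```

Bisection is used rather than naive fixed-point iteration because the iteration can oscillate, while the residual changes sign exactly once over the search interval.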

So now, we're going to look at this S as a function of a few things.

First of all, as a function of N, the number of user stations,

or in other words the impact of crowd size.

So now I'm plotting S, in megabits per second, against N here. If you look at the aggregate throughput as a function of N across all users, it goes up and then goes down. Going up is easy to understand, because there are more stations. But it quickly starts to go down. This is the point where the tragedy of the commons kicks in so strongly that adding more users reduces even the total throughput across all users.

And this happens around eight users. And if you look at the total throughput divided by N, okay, that is, S over N, that is the average per station throughput. That one actually always goes down.

It never goes up. Why? Because adding more users adds more interference.

What is important is that it goes down so rapidly as you go from, like, two or three users up to ten or fifteen users. It went down from 25 or so megabits per second. Notice it is not 54, because 54 is the physical layer speed.

Okay. After the overhead, it goes down to about 25 as the realistic speed. This is a theme we'll pick up in the next lecture. Okay.
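The up-then-down aggregate curve and the always-falling per-station curve can be reproduced with a small numerical sketch. The timing numbers are the lecture's; the ACK airtime and the W = 16, B = 3 mapping are my assumptions, so the exact crossover point will differ from the lecture's chart.

```python
SLOT, SIFS = 9.0, 10.0
DIFS = SIFS + 2 * SLOT                  # 28 us
L = 8192                                # payload bits
T_DATA = 16 + 24 / 6 + (16 + 240 + 32 + L) / 54
T_ACK = 30.0                            # assumed ACK airtime (placeholder)
T_B = DIFS                              # idle backoff slot, per the lecture
T_S = T_DATA + SIFS + T_ACK + DIFS      # successful-transmission slot
T_C = T_S                               # a collision costs as much as a success

def solve_tau(n, W=16, B=3):
    # bisection on the Bianchi-style fixed point between tau and p
    def residual(tau):
        p = 1 - (1 - tau) ** (n - 1)
        f = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                               + p * W * (1 - (2 * p) ** B))
        return tau - f
    lo, hi = 1e-9, 0.999
    for _ in range(200):
        mid = (lo + hi) / 2
        if residual(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def throughput(n):
    """Aggregate saturated throughput S in Mb/s (= bits per microsecond)."""
    tau = solve_tau(n)
    p_tr = 1 - (1 - tau) ** n                      # some station transmits
    p_s = n * tau * (1 - tau) ** (n - 1) / p_tr    # ...and exactly one does
    avg_slot = (1 - p_tr) * T_B + p_tr * p_s * T_S + p_tr * (1 - p_s) * T_C
    return p_tr * p_s * L / avg_slot

for n in (1, 2, 4, 8, 16, 32):
    print(n, round(throughput(n), 2), round(throughput(n) / n, 2))
```

Even with these assumed constants, the per-station value S over N falls monotonically in N, which is the qualitative behavior the chart shows.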

In today's lecture, we'll notice the shape of this drop. It drops very rapidly, to the point of going down all the way to only one or two megabits per second. So no wonder in a busy hot spot the average per station throughput is so low: despite all the smart ideas, CSMA random access controls the tragedy of the commons in a very inefficient way. Now, a few more charts. For example, we can

also measure S as a function of aggressiveness.

One way to look at aggressiveness is to look at the minimum window size, the number of slots you may have to wait. Now we see that, for different sizes of the crowd, initially, if you make W min bigger, you make this contention less aggressive, and the throughput actually goes up.

Okay? That's very good. But at some point it will go down, because you are so non-aggressive that you are actually wasting idle resources in the network. So there's a point beyond which being more polite actually hurts your throughput. Again, this is very typical of a cocktail party: you don't want to be too aggressive, but if you're too non-aggressive then you're just wasting time slots.

And as the crowd gets bigger and bigger, okay, we see that the range of W min before the curve starts to bend down becomes longer and longer. That means as the crowd gets bigger and bigger, it pays more and more to reduce aggressiveness.

Another way to look at aggressiveness is to look at the maximum number of backoff stages that you allow. As you make B bigger, you tend to increase the average contention window size and therefore become less aggressive, and you'll see a similar behavior here.

Okay? As the crowd becomes bigger and bigger, the impact is more prominent. The throughput actually becomes bigger as you become less aggressive. The impact of B, however, is less prominent than the impact of W min there.

Finally, S as a function of the payload size L. Okay, we were talking about somewhere around here, getting around 25 megabits per second. Now you see a monotonically increasing curve, because more payload means less overhead, relatively speaking.
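This overhead argument can be seen directly from the frame airtime derived earlier: the fraction of airtime carrying payload simplifies to L over 1368 plus L, which only grows with L. A small check, ignoring SIFS, ACK, DIFS, and collisions:

```python
def t_frame(L):
    # data-frame airtime from earlier in the lecture: 25.33 + L/54 microseconds
    return 16 + 24 / 6 + (16 + 240 + 32 + L) / 54

def payload_fraction(L):
    """Fraction of the frame airtime spent carrying payload bits."""
    return (L / 54) / t_frame(L)     # algebraically equal to L / (1368 + L)

for L in (1024, 8192, 16384):
    print(L, round(payload_fraction(L), 3))
```

At L = 8,192 bits, roughly 86% of the frame airtime is payload, and the fraction keeps rising with larger L.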

But this is a misleading chart, because remember, all the way back early in the lecture, we did not model the actual interference or collision phenomena accurately. As the payload gets bigger and bigger, the chance of collision actually goes up, because the chance of two packets overlapping in time goes up as it takes longer to transmit the payload. Okay.

If you incorporate that factor, this curve will actually start to bend over and turn downward. So, in summary, what we have seen is that in WiFi, interference management is done through random access rather than power control and so on. And a big part of the reason is that it is operating in the unlicensed spectrum.

There are a few very good ideas, including randomized and exponential backoff, including differentiated wait and listen intervals, and including limited explicit message passing. By the way, RTS/CTS is not always enabled, and that may also explain some of the inefficiency of throughput in hot spots. But it has got a big limitation.

Now, we went through a simple, relatively speaking, but still somewhat involved approximation of the throughput, and we saw that this throughput per station, as a function of N, drops rapidly: the performance decreases very fast as the contention intensifies, even as we go up from several users to just, say, ten or fifteen users. And this is the underlying reason why the performance in hot spots tends to be poor, unless it so happens that you don't have a large crowd.

And we will see a fundamentally different way to do distributed coordination in taming this tragedy of the commons. So now we're going to wrap up our wireless lectures with one more lecture on a very practical and important question: what is the actual speed that I can expect on my cellular, LTE or 3G, network? That will be the next lecture. I'll see you then.