It's going to take more than 10 to the 59 years.

That's if we assume that the computer can do 10 billion operations per second.

Now, 10 to the 59 years.

There are two issues here.

First, to the best of my knowledge, the laptops and desktops that we use today cannot actually do 10 billion operations a second.

Second, 256 is still a very small input size. In practice we'll be dealing with much larger input sizes, and with computers that can process fewer than 10 billion operations a second.

You get the point.

If, under these simplifying and generous assumptions, I'm getting 10 to the 59 years just to handle an input of size 256, you can imagine what's going to happen in practice.
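As a quick sanity check (my own sketch, not part of the lecture), we can redo that back-of-the-envelope calculation directly:

```python
# Back-of-the-envelope check of the 10^59-years figure:
# a 2^n-time algorithm at n = 256, on a (generous) machine
# doing 10 billion operations per second.

n = 256
total_ops = 2 ** n                 # about 1.16e77 operations
ops_per_second = 10 ** 10          # 10 billion operations per second
seconds_per_year = 3600 * 24 * 365

years = total_ops / ops_per_second / seconds_per_year
print(f"{years:.2e} years")        # roughly 3.7e59 -- more than 10^59 years
```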

So this is why we are interested in the growth of functions: I'm interested in how the running time is going to grow as the input size grows.

A single input size is not what concerns me.

What concerns me is this: as the input size grows, how does the running-time function grow?

If I go from 2 to 16 to 256, and the function grows sublinearly, right?

If I draw the plot here, on the x-axis I put the input size, and on the y-axis I put the running time.

I mark 2 here, 16 here, and 256 here. Again: 2, 16, 256.

Suppose this is a linear function: at 2 it's 2, at 16 it's 16, so this is the line y = x.

The function log n actually grows slower than that, slower than linearly.

Algorithm B is going to take linear running time.

The quadratic one is going to take n squared, but 2 to the n is going to start growing very fast.

This is where 2 to the n goes, this is n squared, this is n here, and this is log n.
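To make the comparison concrete (a small sketch of my own, not from the lecture), here are those four functions evaluated at the three input sizes mentioned:

```python
import math

# Growth of the four running-time functions at the input
# sizes from the lecture: n = 2, 16, 256.
for n in (2, 16, 256):
    print(f"n={n:>3}  log n={math.log2(n):>2.0f}  "
          f"n={n:>3}  n^2={n * n:>5}  2^n={2 ** n:.3g}")
```

At n = 256, log n is just 8 and n squared is 65,536, while 2 to the n is already around 10 to the 77.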

So when I look at the growth of functions in these terms, I can see that algorithm A's running time grows as the input size grows.

Of course it will grow.

I cannot think of many interesting algorithms whose running time does not grow with the input size.

So that is not the question.

The running time is going to grow as the input size grows; the question is, how does it grow?

This is a nice growth.

We would love an algorithm whose growth is like this.

We would also love an algorithm whose running time grows linearly.

We can tolerate, and would even be happy with, quadratic time, n squared.

But an algorithm that takes 2 to the n is not going to be very practical.

Again, as I said, if n is 256 and the computer can handle 10 billion operations per second, it's going to take us 10 to the 59 years.

What is the point of such an algorithm?

Even if it is correct, even if it is elegant, even if it is written in whatever programming language, it's useless.