In this video, we'll learn how to make solutions faster,

and how to actually measure the running time.

In fact, it's very useful to know which parts of your code are fast,

and which are actually eating up all the time,

and that is also connected to the big O notation.

Imagine that you have some code for which you've got a big O bound,

and you want to know how much some particular line contributes to that bound.

Let's say there is a call to some function doSomething(),

and it's placed inside two loops,

one up to n and the other up to m. So, in this case,

it's easy to see that this particular line contributes big O of

n times m times the number of operations the doSomething function performs.

So, it may be really important for performance.

If the total complexity is big O of n squared times m,

and the complexity of the doSomething function is big O of n,

then its contribution is big O of n times m times n,

which is big O of n squared times m,

the same as the total complexity.
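To make this concrete, here is a minimal Python sketch of that situation. The names and the loop bounds are hypothetical, and doSomething is assumed to be O(n):

```python
def do_something(n):
    # hypothetical O(n) function: its body performs about n unit operations
    total = 0
    for i in range(n):
        total += i
    return total

def solve(n, m):
    # the do_something(n) call below runs n * m times,
    # so this line contributes O(n * m * n) = O(n^2 * m) in total
    result = 0
    for i in range(n):
        for j in range(m):
            result += do_something(n)
    return result

print(solve(3, 2))  # small run just to check it works
```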

Of course, the whole program may hide a large constant,

but you won't get better asymptotics without optimizing this particular line.

For example, by executing it fewer

times or by making the doSomething function itself faster.

Or this line could be not so important.

If the overall complexity is big O of n cubed times m,

then our line is already asymptotically faster than something else.

And so, optimizing it is useless.

Even if you just take it away,

there is still some other part, which takes big O of n cubed times m operations.

And in fact, it's this part that's holding everything back, not our doSomething function.

In fact, knowing which parts are the

slowest is very important in making a solution faster.

Let's imagine a general situation.

You have some solution, but it's too slow.

Either you've estimated the number of operations and it's too

high, or you just sent it to the testing system,

and got the "time limit exceeded" verdict.

Of course, you could just invent another, faster solution from scratch.

But more often, you could just improve your current solution.

First, you should always try to improve asymptotically.

Not the constant behind big O,

but the dependence on the size of the input.

Because it's only these improvements that are really efficient.

And you should try to speed up only the slowest parts of the solution,

which have the same big O bound as the whole solution,

because the other parts are already asymptotically faster.

If that fails, and moreover,

your solution is just a bit slower than needed,

for example, one and a half seconds against a one-second time limit,

then you might try to optimize the constant hidden inside the big O.

But otherwise it's useless.

There is no sense in replacing two additions by one if there are 10 to the 12th of them.

That will never pass anyway.

Usually, there is no need for this at all,

as asymptotically optimal solutions usually work much faster than needed,

and others work much slower.

If you still decided to optimize the constant,

you should also do it where it counts.

One way is to get rid of heavy operations or make them less frequent.

In particular, you should delete

any substantial debug output as it's unnecessary and very slow.

Another potentially fruitful way is to not compute something twice.

If it's really at a hot spot,

then it could speed you up about two times.
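As an illustration, here is one common way to avoid recomputing something twice in Python, caching results of a function. The function expensive() is a hypothetical stand-in for some heavy computation:

```python
from functools import lru_cache

# expensive() stands in for some heavy computation that might otherwise
# be redone many times with the same argument
@lru_cache(maxsize=None)
def expensive(x):
    return sum(i * i for i in range(x))

def process(queries):
    # repeated queries now hit the cache instead of redoing the work
    return [expensive(q) for q in queries]

print(process([10, 10, 10]))  # prints [285, 285, 285]
```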

So far, we've only had theoretical bounds on the number of operations,

but you could also measure the actual running time.

But that has one downside, we need to have an already implemented solution for that.

It could be done using the testing system.

If you just submit a solution,

you most likely will get its time on the worst-case test,

but it depends on the rules.

Some platforms like Codeforces even have

a remote interface where you can run your program as though it's being tested.

But you could also do this on your own computer.

For that, though, you will need a worst-case test.

And in fact, performance on your machine could be different from

the performance on the testing system, because it depends on the environment:

Particular machine specs, operating system,

version of the compiler, and so on.

However, locally you've got more power:

you could measure different parts of your code without

wasting attempts on the testing system.

Locally, the simplest thing to

do is to count the number of times someFunction() is called.

You could just take some variable,

and increase it each time you enter that function.

You could also do this just at some particular place in the code.

And in the end, you could just output that variable and see the precise number of calls.

It could help if it's hard to estimate theoretically.
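A minimal sketch of this counting trick in Python, with hypothetical names:

```python
call_count = 0  # diagnostic counter

def some_function(x):
    global call_count
    call_count += 1  # increased each time we enter the function
    return x * x

def run(n):
    return sum(some_function(i) for i in range(n))

run(1000)
print(call_count)  # prints the precise number of calls: 1000
```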

You could measure the running time of the whole program with

standard system utilities like time on UNIX-like systems.

But you also could measure the time from inside the program.

In any language, there is a system function

which returns the elapsed time.

So you just record the time before a part, and at the end,

subtract it from the current time.

In this way, you could measure not only the time the whole program

takes but also some particular parts of it.
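In Python, for example, this could look like the following sketch (time.perf_counter is a standard high-resolution clock; the measured part is hypothetical):

```python
import time

def some_part(n):
    # stand-in for one part of the solution
    return sum(i * i for i in range(n))

start = time.perf_counter()            # record the time before the part
result = some_part(10**6)
elapsed = time.perf_counter() - start  # subtract it from the current time
print(f"some_part took {elapsed:.3f} seconds")
```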

And a more serious tool: if you want to know which parts are actually slow and which are fast,

there are also special programs called profilers,

which measure the running time and number of calls for each function in the code.

And that's just another case where well-structured

code benefits, because if you have no functions,

then it will give you no useful information.

But if you structure it well, each function

contributes some amount of time and you'll see it.
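As one example, Python ships a profiler in the standard library, cProfile. A minimal sketch, with hypothetical function names:

```python
import cProfile
import io
import pstats

def fast_part(n):
    return sum(range(n))

def slow_part(n):
    return sum(i * i for i in range(n))

def main():
    fast_part(10**5)
    slow_part(10**5)

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# report time and call counts per function, slowest first
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
print(buf.getvalue())
```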

So, we've talked about the time limits,

but there are also memory limits.

Your program should not take more memory than what's stated in the statement.

We could use something like big O notation here as well.

But in fact, the memory limits are usually much weaker than the time limits.

For example, if you do too many appends to lists,

you most likely will break the time limit first, not the memory limit.

The only common cause of getting a "memory limit exceeded" verdict is a large array.

But you could explicitly calculate the size of an array by just

multiplying its dimensions by the size of one element.

You just need to know how much memory each variable type takes.
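The arithmetic can be sketched like this (hypothetical dimensions, assuming 8-byte elements such as 64-bit integers):

```python
# a two-dimensional array of 64-bit integers, dimensions from the statement
n, m = 2000, 3000
element_size = 8                    # bytes per 64-bit integer
total_bytes = n * m * element_size  # multiply the dimensions by the element size
print(total_bytes / 2**20, "MiB")   # about 45.8 MiB
```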

So, to summarize what we've learned about time complexity:

your solution should work fast on any test including the worst-case one.

Now, we know how to get big O bounds on the number of unit operations.

And you should always estimate these bounds before implementing a solution.

Otherwise, you could waste much time on the solution that clearly couldn't pass.

To check if the solution is fast enough,

you should take the limits on the input variables from the statement,

plug them into the big O bounds,

and compare the number with the number of operations that could be run per second.
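This check can be sketched in a few lines. The limit n ≤ 10^5, the O(n log n) bound, and the 10^8 operations-per-second figure are all hypothetical example numbers:

```python
import math

# hypothetical: the statement says n <= 10**5, the solution is O(n log n)
n = 10**5
ops = n * math.log2(n)   # rough operation count from the bound
ops_per_second = 10**8   # a common rough estimate for simple operations
print(f"{ops:.2e} operations")  # about 1.66e+06, far below 10**8
```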

That's a coarse estimate, but it's nearly always enough.

Speed up only the performance-critical parts.

Even if you could greatly improve something else,

it will not affect the running time.

First of all, try to make the solution faster asymptotically.

Optimize constants only as a last resort.

Sometimes, it could be useful to not just estimate the number of

operations but to actually measure the running time, especially when dealing with constants.

And that's all for this lesson. So, good luck.