0:29

So in order to prove that two problems, X and Y, have the same complexity, or the same computational requirements, first we show that X linear-time reduces to Y, and then we show that Y linear-time reduces to X.

And that way, even if we don't know what the complexity is at all, we know that X and Y have the same computational requirements.

And this is what we just did for sorting and convex hull.

In this case, we know that both of them take time proportional to N log N.

But reducing one to the other in both directions shows that they're equivalent with respect to computational requirements.

In the one case, reducing sorting to convex hull gave us the lower bound, and in the other case, reducing convex hull to sorting gave us a useful algorithm.

Together they're useful because they help us classify those problems.
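As a concrete sketch of the convex-hull-to-sorting direction, here's the monotone-chain variant of the Graham scan in Python (a minimal sketch, not the lecture's code): sort the points once, then build the hull in a single linear pass, so the total cost is dominated by the sort.

```python
def convex_hull(points):
    """Convex hull by reduction to sorting (Andrew's monotone chain)."""
    pts = sorted(set(points))          # the reduction: one call to sort
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # > 0 if the turn o -> a -> b is counterclockwise
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # counterclockwise, no repeated endpoints
```

After the sort, each point is pushed and popped at most once, so the two passes are linear, exactly the shape of a linear-time reduction.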

1:39

Now, there is a bit of a caveat to this.

And this is a little bit of an idealized situation,

but it actually is something that can lead to trouble in the real world.

So we have these two problems, and we have these two propositions, that they linear-time reduce to one another.

Now, picture a big software engineering effort, with a lot of people making decisions, where it's known that the two problems have the same complexity.

Maybe some system designer has a big project with a lot of components, and they need both sort and convex hull.

And one programmer, call her Alice, is charged with the job of implementing sort, understands the reduction well, and decides to do it using convexHull, having learned this cool trick.

And the other one, call him Bob, knows the Graham scan, and says, okay, I'm going to implement convexHull using sort.

In a big system, that's an infinite reduction loop, which certainly is going to lead to problems.

Whose fault?

Well, that would be the boss.

Alice and Bob just did what they were supposed to do.

And someone could argue that maybe Alice could have used the library routine for the sort, but you get the point.

For a more complex situation, this definitely could come up.
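The trouble is easy to demonstrate with a toy Python sketch (the bodies are placeholders, not real implementations) of what happens when each module is written as a reduction to the other:

```python
def sort(a):
    # Alice's sort: implemented by reduction to convex hull
    return convex_hull(a)

def convex_hull(points):
    # Bob's convex hull: his Graham scan begins by sorting,
    # so it calls sort, which calls convex_hull, and so on forever
    return sort(points)

try:
    sort([3, 1, 2])
except RecursionError:
    print("infinite reduction loop")
```

Neither function is wrong in isolation; the cycle only appears when the two reductions are composed in the same system.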

Just to show the power of this kind of technique and

how it relates to research that's ongoing into computer science,

let's look at some simple examples from arithmetic.

So here's grade-school integer multiplication; let's do it with bits.

I have two N-bit integers, and I want to compute the product.

And this is the way you learn to do it in grade school.

For every bit in the bottom you multiply it by the top and

then you add them all together.

3:43

And that's going to take time proportional to N squared.

You have N bits and you have N rows, and so that's time proportional to N squared.
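A sketch of the grade-school method over bits in Python: for each 1 bit of the bottom operand, shift the top operand and add the row in. With N-bit operands that's about N additions of N-bit numbers, which is where the N-squared bound comes from.

```python
def grade_school_multiply(x, y):
    # For every bit in the bottom operand y, multiply it by the top
    # operand x (a shift) and add the rows together.
    product = 0
    shift = 0
    while y:
        if y & 1:                 # this bit of y contributes a row
            product += x << shift
        y >>= 1
        shift += 1
    return product
```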

Nowadays, people are doing integer multiplication on huge

integers because mathematical properties of integers play

an important role in cryptography and other applications.

So people are more interested in algorithms that are faster

than quadratic for something like integer multiplication.

4:21

Now, the first thing you can do with reduction: people have figured out that you can take integer division, squaring, square root, and all different other integer operations, and reduce them to integer multiplication.

And vice versa, so you can show that all of these things have the same complexity, even though we don't know what it is, just by simple reductions one to the other.

And you could imagine how to reduce divide to multiply, and so forth; these reductions have been done in all different directions.
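One classic example of such a reduction is multiplication to squaring, via the identity xy = ((x+y)^2 - (x-y)^2)/4. Here's a Python sketch; `square` below is just a stand-in for whatever fast squaring routine is available.

```python
def square(x):
    # stand-in: any fast squaring algorithm could be plugged in here
    return x * x

def multiply_via_square(x, y):
    # Reduction from multiplication to squaring:
    # (x+y)^2 - (x-y)^2 = 4xy, so the integer division below is exact.
    return (square(x + y) - square(x - y)) // 4
```

So any improvement to squaring immediately improves multiplication, and that is exactly what "same complexity" means here.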

5:07

So now the question is, are all of these really as difficult as the brute-force algorithm?

Well, people studied that for a long time, and actually one of the earliest divide-and-conquer algorithms, due to Karatsuba and Ofman, showed that integer multiplication can be done in time N to the 1.585.

It's divide and conquer: divide the integers in half, and find a clever way to combine the pieces to reduce the exponent.
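A minimal Python sketch of the Karatsuba idea (not tuned the way a real library would be): split each integer into high and low halves, and get all three partial products from only three half-size multiplications instead of four, which is what yields the exponent log2 3, about 1.585.

```python
def karatsuba(x, y):
    # Multiply nonnegative integers using three half-size products.
    if x < 16 or y < 16:                       # small base case
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x1, x0 = x >> half, x & mask               # x = x1 * 2^half + x0
    y1, y0 = y >> half, y & mask               # y = y1 * 2^half + y0
    a = karatsuba(x1, y1)                      # high product
    b = karatsuba(x0, y0)                      # low product
    # the clever step: middle term from ONE extra multiplication
    c = karatsuba(x1 + x0, y1 + y0) - a - b    # = x1*y0 + x0*y1
    return (a << (2 * half)) + (c << half) + b
```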

And people have been working on this, you can see, through the decades actually; there was a big breakthrough just in 2007 that gets much closer to N log N, although there's no known lower bound for this problem.

It could be linear, so people are still going to work on it.

6:12

These faster algorithms may be useful for very large N, or for different ranges of N.

And actually, in practice, there's a multiple-precision library, used in many symbolic mathematics packages, that chooses among five different algorithms for integer multiplication, depending on the size of the operands.

And again, in applications such as cryptography,

the N can be truly huge, and people want to do this computation.

6:44

So we don't know what the difficulty of integer multiplication is; we just know that the difficulty of all the integer operations is characterized by this one problem.

And it's similar for a matrix multiplication,

another famous problem that people are still trying to understand.

And again, there's the secondary-school algorithm. To compute the entry in row i and column j, you compute the dot product of the ith row of one argument and the jth column of the other.

That dot product fills that entry, and you do that for every entry.

So that's going to be time proportional to N cubed because you have to

do N multiplications for each of the N squared entries in the result matrix.
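The brute-force method as a Python sketch: three nested loops, doing N multiplications for each of the N squared entries.

```python
def mat_mult(A, B):
    # C[i][j] = dot product of row i of A and column j of B
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):        # N terms per entry
                C[i][j] += A[i][k] * B[k][j]
    return C
```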

And again, there's all different kinds of matrix operations that can be

proven to be equivalent in difficulty to matrix multiplication through reduction.

7:44

And so in the equivalence class of matrix multiplication are all these other things, like solving systems of linear equations, and the determinant, and other things.

But how difficult is it to multiply two matrices?

So again, reduction gives us a big class of problems to make it even more

interesting to know the difficulty of this one problem, and

then research on the difficulty of this one problem has been ongoing.

In this case, running time is N cubed for the brute-force algorithm.

8:21

And who knows when that was discovered, I don't know,

18th century or 15th or something.

So once computers got involved, Strassen's famous divide-and-conquer algorithm, like Karatsuba's for integer multiplication, breaks it down through divide and conquer, and gets the exponent down to 2.808.
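Here's a Python sketch of Strassen's scheme for n-by-n matrices where n is a power of two (real implementations pad odd sizes and switch to the cubic method below some cutoff): seven half-size products instead of eight, which gives the log2 7, about 2.808, exponent.

```python
def strassen(A, B):
    # Multiply square matrices; assumes n is a power of two.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def split(M):   # quadrants: top-left, top-right, bottom-left, bottom-right
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # the seven half-size products
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # recombine into the four quadrants of the result
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```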

And people have been working through the years to find successive improvements to this; the last one went from 2.3737 to 2.3727, which doesn't seem like much,

but maybe one of these research results will be a breakthrough that will

give us an efficient new algorithm for all of these problems involving matrices.

9:27

So when we started this lecture,

we started with the idea that it'd be good to classify problems.

But it's tough to actually classify them, because new research is involved even for things as simple as integer or matrix multiplication.

But at least what we can do is define equivalence classes: even though we don't know what the complexity is, we know there are a lot of important problems that have that same complexity.

There's a really important class that we're going to talk about in the last lecture, called the NP-complete problems, and all kinds of important problems fall into it.

And as time wears on, researchers have found ways to put many,

many problems into equivalence classes.

So at least we know that those problems are the same difficulty,

even if we don't know what it is.

This is actually called the Complexity Zoo; it's really a large number of complexity classes.

10:38

Now, a lot of these are definitely theoretical devices that are only important for filling in our understanding of some parts of the theory.

But many of them, like matrix multiplication and integer multiplication, are very important to practical algorithms.

So there's certainly a huge amount of research still

going on into trying to understand computational difficulty of problems.

11:26

We designed algorithms that made use of priority queues in effective ways; that's a reduction.

And lots of algorithms use sorting. Really, that's the idea of reusable software modules.

We find fundamental problems and

we develop implementations that can be used by clients.

Every client using an implementation is doing a reduction.

And the other thing is that reductions help us determine the difficulty of our problem, and guide our search toward finding an efficient solution for it.

12:04

So in particular, when we get to extremely difficult problems,

we'll know whether or not we should look for an exact algorithm or

we should find something that doesn't quite solve the problem.

And we'll talk a lot more about that in the last lecture.

And in all of these studies, in the complexity zoo and in any algorithm that

we're going to develop for a complicated problem, what we're going to want

is reduction to link the new problem to problems that we know how to solve.