0:03

Given these specific definitions,

we do have a mathematical tool that will help us get a start on classifying

problems according to their difficulty,

called reduction, and that's what we want to talk about next.

So, what we want to know is which problems are in P. Well,

that's the easier of the questions.

Those are the ones that we're solving with efficient algorithms.

At least we know they're in P. But really the question is,

which ones are intractable?

Which ones are in NP but not in P?

And that's difficult to know and no one's been able to show that even one problem is

in NP but not in P. And that's the basic answer that we want for so many problems.

Can we just solve it on any mobile device or any computing device?

Or are we going to need 10 to the 48th uber-computers?

So, in order to get going,

what theoreticians have done is to start with a reasonable assumption.

And so the assumption that we're going to start with is that satisfiability,

the Boolean satisfiability problem,

is intractable.

That seems like such a fundamental problem, and nobody

knows a better approach than to just try all truth assignments of the variables.

Now, assuming that a particular problem is

intractable is already assuming that P is not equal to NP.

So bear that in mind.

So now, we have at least the possibility, under

that assumption, of proving that other problems are going to be intractable.

So again we have a brute force algorithm for finding

solutions for any SAT instance, but no efficient algorithm is known.

So it's reasonable to assume that SAT is intractable.
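The brute-force approach mentioned here, trying all truth assignments, can be sketched in a few lines. This is only a schematic illustration, and the clause encoding (pairs of a variable index and a negation flag) is an assumed choice, not anything from the lecture:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    # A clause is a list of (variable_index, negated) pairs; a literal is
    # true when the variable's value differs from its "negated" flag.
    for assignment in product([False, True], repeat=n_vars):
        # The formula is satisfied when every clause has a true literal.
        if all(any(assignment[i] != negated for i, negated in clause)
               for clause in clauses):
            return assignment          # a satisfying truth assignment
    return None                        # all 2^n assignments fail: unsatisfiable
```

The loop over `product([False, True], repeat=n_vars)` is exactly why this takes exponential time: 2 to the nth assignments in the worst case.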

So, what we're going to do now is use

that assumption to prove our relationships among problems.

Now, maybe the assumption is wrong,

maybe P equals NP and they're all tractable.

But if we believe that P is not equal to NP, then with just

that one assumption we can prove things about other problems.

And so that's what we're going to look at next.

If SAT is intractable,

which again is equivalent to assuming P is not equal to NP,

then which other problems are intractable?

And to do that we're going to use a tool called polynomial time reduction.

And this is the definition,

if we have two problems X and Y, we're going to say that X poly-time

reduces to Y if you can use

an efficient solution to Y to develop an efficient solution to X.

If we know how to solve Y and we can use that to solve X,

we say X reduces to Y.

So a typical reduction is the following.

So I have an algorithm that solves Y,

and it's guaranteed to solve it in polynomial time, no matter what the input is.

Here's my method for solving X.

So first thing is I'll use an efficient method to

transform whatever instance of X I have to solve into an instance of Y.

Then I'll use my efficient method that

solves Y and that's going to give a solution of some kind.

And then I'm going to transform that solution back to a solution of X.

So that's an effective and efficient method for solving

any instance of X given an efficient method for solving Y.
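That three-step pattern can be written down directly. The following is a schematic sketch, with a toy illustration attached; the problems and transforms are placeholders of my choosing, not anything specific from the lecture:

```python
def reduce_and_solve(x_instance, to_y, solve_y, back_to_x):
    # The three-step pattern: transform the X instance into a Y instance,
    # solve Y with the guaranteed poly-time method, map the solution back.
    y_instance = to_y(x_instance)
    y_solution = solve_y(y_instance)
    return back_to_x(y_solution)

# Toy illustration: solve "find the maximum" (X) via sorting (Y).
answer = reduce_and_solve(
    [3, 1, 4, 1, 5],
    to_y=lambda xs: list(xs),     # the instance carries over unchanged
    solve_y=sorted,               # the efficient solver we already have for Y
    back_to_x=lambda ys: ys[-1],  # read the X answer off the Y solution
)
```

For the reduction to count as polynomial time, both `to_y` and `back_to_x` must run in polynomial time, on top of the polynomial-time guarantee for `solve_y`.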

It's like using a library method in modular programming.

We compute the exponential of x by calling

the math library, which solves that efficiently, and we use it;

then any program we have that calls it

has an efficient solution for that part.

So this is just a mathematical formalism

wrapped around that idea: the algorithm for Y has to be guaranteed to

solve any instance of Y in polynomial time, and

the transformations have to be guaranteed to run in

polynomial time, for the whole thing to work.

And there's lots of ways to extend this.

This is the simplest way because we have polynomial time to play with.

So for example we could use a polynomial number

of instances of Y and we could do lots of transformations,

as long as none of the costs can be exponential in any situation.

But we get pretty far just with this simple concept of reduction.

Important point about polynomial time reduction is that it's transitive.

If X reduces to Y

and Y reduces to Z, then X reduces to Z.

And again if you think about it in terms of using

a library program that one might use another library program and so forth.

So that is, if X reduces to Y, then that's the diagram that we had on the previous page.

But if Y reduces to Z, you can do the same thing:

transform the instance of Y into an instance of Z,

use your efficient method for solving Z,

transform that solution back to a solution of Y, and then transform that result back to X, and so forth.

So with this little diagram you can see that polynomial time reduction is transitive.
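In code, chaining two reductions is just composing the transforms. This is purely schematic, with placeholder numeric transforms standing in for real instance and solution maps:

```python
def compose_reductions(x_to_y, y_sol_to_x, y_to_z, z_sol_to_y):
    # Chain a reduction X -> Y with a reduction Y -> Z to get X -> Z:
    # compose the instance transforms one way and the solution transforms
    # the other way.  Polynomials compose to polynomials, so the combined
    # reduction is still polynomial time.
    x_to_z = lambda x: y_to_z(x_to_y(x))
    z_sol_to_x = lambda s: y_sol_to_x(z_sol_to_y(s))
    return x_to_z, z_sol_to_x

# Placeholder transforms, just to show the plumbing works end to end.
x_to_z, z_sol_to_x = compose_reductions(
    x_to_y=lambda n: n + 1,       # X instance -> Y instance
    y_sol_to_x=lambda s: s - 1,   # Y solution -> X solution
    y_to_z=lambda n: 2 * n,       # Y instance -> Z instance
    z_sol_to_y=lambda s: s // 2,  # Z solution -> Y solution
)
```
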

And we're going to take advantage of that in just a second.

So, from a theoretical point of view there's two ways to exploit reduction.

So the first one is: if I have a new problem and I want to solve it,

I can find a problem Y for which I know an efficient algorithm,

like a library program, and ask how it relates to my new problem.

If I can transform my new problem into an instance of this problem Y,

then I can just use it.

Polynomial time reduce X to Y and

that efficient algorithm for Y is

going to eventually give me an efficient algorithm for X.

Now that's algorithm design through

reduction, and we're not emphasizing it in this lecture,

but any time you take a course on algorithms you'll see many applications,

where we take an algorithm like sorting

and another problem like removing duplicates; since we know we have

an efficient algorithm for sorting, we can reduce the new problem to it and

get an efficient algorithm for this other problem.

So, algorithm design is a very important use of reduction.

But in today's context,

we're going to use reduction to establish intractability.

So now I have a new problem Y,

what I want to do is find a problem X that SAT reduces to.

Then find a reduction from X to Y,

that gives me a reduction really from SAT to Y,

SAT to X, and X to Y.

And that tells me that if I had an efficient algorithm for Y,

I would have an efficient algorithm for X.

And I would also have an efficient algorithm for SAT,

but that's a contradiction because I'm assuming that SAT is intractable.

So if I have these reductions,

I can't have an efficient algorithm for Y.

That's a critical tool for this lecture,

but actually just think about the easiest option.

If I have a problem Y,

if I can reduce SAT to that problem and I'm assuming that SAT is intractable,

then I know that Y is intractable.

Because an efficient algorithm for Y would mean I have an efficient algorithm for SAT.

So let's start with that.

For example, here's an easy proof that

SAT poly-time reduces to integer linear programming.

So remember, SAT itself is to satisfy simultaneous Boolean sums with variables that are true or false.

So that's an example of an instance of SAT,

and that's an example of solution.

Integer linear programming is to solve simultaneous linear inequalities,

with variables restricted to 0 or 1.

So, here's a reduction from SAT to an instance of integer linear programming.

All we do is,

every time that we have a NOT of a Boolean variable,

we replace it by one minus the corresponding 0-1 variable.

So, NOT X0 goes to 1 minus T0,

then X1 goes to T1,

and X2 goes to T2 in the first equation.

And then rather than equals true,

we say greater than or equal to 1.

And then you can see that with the variables 0 or 1,

that inequality is going to be true

if and only if the Boolean sum is satisfied, by the way that these things are set up.

So, that is, TI is 0 if and only if XI is false, and TI is 1 if and only if XI is true,

and those inequalities are exactly equivalent to those Boolean equations.

If we can find a solution to that integer linear programming instance,

then we can just use that transformation to get a solution to that instance of SAT.

Given any instance of SAT,

we can make an instance of integer linear programming with an easy transformation.

Think of having a library method that solves integer linear programming,

and it requires that we give it inequalities and variables that are 0 or 1.

So, we take our satisfiability instance and convert it

into that, using the simple one-minus trick for NOTs.

Then the result that we get back gives us zeros and ones, and we

just go through and convert them to trues and falses to get the solution to SAT.
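The clause-by-clause transformation just described can be sketched directly. The literal encoding (pairs of a variable index and a negation flag) is an assumed choice for illustration; the inequality it produces matches the one-minus trick from the lecture:

```python
from itertools import product

def clause_to_inequality(clause, n_vars):
    # clause: a list of (variable_index, negated) pairs (an assumed encoding).
    # Each x_i becomes a 0/1 variable t_i; each NOT x_i becomes (1 - t_i).
    # Moving the constants to the right gives: sum(coeffs[i] * t_i) >= rhs.
    coeffs = [0] * n_vars
    constant = 0
    for i, negated in clause:
        if negated:
            coeffs[i] -= 1      # NOT x_i contributes (1 - t_i)
            constant += 1
        else:
            coeffs[i] += 1      # x_i contributes t_i
    return coeffs, 1 - constant

# (NOT x0 OR x1 OR x2) becomes (1 - t0) + t1 + t2 >= 1, i.e. -t0 + t1 + t2 >= 0.
clause = [(0, True), (1, False), (2, False)]
coeffs, rhs = clause_to_inequality(clause, 3)

# Sanity check: the inequality holds for exactly those 0/1 assignments whose
# true/false counterparts satisfy the clause.
for t in product([0, 1], repeat=3):
    clause_sat = any(t[i] != negated for i, negated in clause)
    ineq_holds = sum(c * ti for c, ti in zip(coeffs, t)) >= rhs
    assert clause_sat == ineq_holds
```

Both directions of the transformation are clearly polynomial (in fact linear) in the size of the SAT instance, which is what the reduction requires.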

So, what this means is

that if we had an efficient solution to integer linear programming,

it would give us an efficient solution to SAT.

We're assuming we don't have an efficient solution to SAT,

so therefore we have a contradiction, and therefore there can be

no efficient solution to integer linear programming, just under that assumption.

If SAT is intractable,

so is integer linear programming.

And that's interesting for sure,

but what's even more interesting is,

many problems have been proven to be intractable if SAT is intractable in this way.

And these are just examples of problems,

and you can look up details of many of them.

The subset sum problem, the partition problem,

the bin packing problem, all of these are classic

problems; many relate to graphs, and others to sets of numbers.

So, TSP is in this group of problems.

And these are very classic problems that

people were trying to solve even as late as the 70s.

And what happened was that Dick Karp

proved that, under the assumption that SAT is intractable,

all of these problems are intractable.

And for each one of these arrows,

he developed a proof.

Given an instance of the one that the arrow points to,

if you had an efficient solution to that,

you could get an efficient solution to the one that the arrow points from.

And then therefore since it goes all the way back to SAT by transitivity,

all of these problems are intractable.

So, Karp's paper on that established the idea that,

we can classify these problems at least in this way.

If that's intractable, they're intractable.

And that was a great step forward, as it gives a way to establish

why we think these problems are difficult.

All you have to do is accept that SAT is difficult,

and you can conclude that all of these are also going to be difficult.

And actually this has been going on for quite some time,

going on five decades now, in so many fields of

study where computation plays a role,

where there are problems that seem to call for

computational solutions: protein folding in chemistry,

financial markets with friction in economics.

This is just a representative list of problems,

that people would like to have efficient solutions for.

But they've been proven to be intractable,

if SAT is intractable,

just by reduction from some existing problem that has this property.

And the list is amazingly long.

In fact, there's thousands of scientific papers per year proving

that problems people would like to solve are intractable if SAT is intractable.

It's a very important tool for understanding that, just under

the one assumption that there's no guaranteed polynomial-time algorithm for SAT,

there's not going to be a guaranteed polynomial-time algorithm for any of these problems,

a huge number of problems.

It's very important to

understand for anyone trying to use a computer to solve a difficult problem.