Hello everybody, welcome back to our unit on linear programming. Today we're going to talk about some different types of linear programming problems. They're all related, but not quite the same. The point is that there are actually several different types of problems that go under the heading of linear programming. The one we've been talking about so far we might call the full optimization version: minimize or maximize a linear function subject to a system of linear inequality constraints, or report that the constraints have no solution if they don't. Now, it turns out there are a number of related problems dealing with systems of linear inequalities that you might want to solve that are maybe a little bit easier. And this will actually be important when we start coming up with algorithms in the next couple of lectures, because those algorithms will actually be solving these other formulations. The first one is optimization from a starting point: given a system of linear inequalities and a vertex of the polytope they define, optimize a linear function with respect to these constraints. So you're given a place to start. Another version is what we'll call solution finding: given a system of linear inequalities, with no objective whatsoever, find some solution to the system, assuming one exists. This is also somewhat easier. And finally, we have satisfiability: given a system of linear inequalities, determine whether or not there is a solution. It turns out that these problems are all equivalent: if you can solve any one of them, you can solve the others. Which is very convenient, because the algorithms we'll be looking at will each only solve one of these versions. First off, it's clear that the full optimization version of the problem is the strongest; using it, you can solve anything else you want. If you have optimization from a starting point, you can just ignore the starting point and solve the problem from scratch.
If you're trying to find a solution, well, the optimal solution is a solution. And if you merely want to know whether there is a solution: you run the full optimization, and if it outputs a solution, you have a solution, great. But the nice thing is that you can also go in the other direction. If you can only solve optimization from a starting point, you can actually do the full optimization. The problem here is: how do you find the starting point? If you had the starting point, you could run the algorithm and you'd be done. But somehow you need a solution to the system, and there's actually a clever way to get one: you add constraints one at a time. So say we have a solution to the first seven of our inequalities, and we now need to turn it into a solution to the first eight. Well, maybe our solution doesn't satisfy this eighth inequality. What you can do is optimize: not only do we want to satisfy these seven, we want to make this eighth inequality, so to speak, as true as possible. And that will give you a solution that satisfies all of them. To see how this works, let's look at an example. We start with this rectangle and a corner of it. We now want to add an inequality that chops our rectangle at this line. So what we're going to do is say: we want our point to be as far below this line as possible. That's just a linear optimization question, so we can solve it using our optimization-from-a-starting-point algorithm. We get that vertex, and what do you know, it's a solution to this bigger system. Next, we want to add this line as an inequality. So again, we solve our optimization from a starting point, we find a solution, and we can now add the additional inequality. We add another one, find a solution, and then finally we've got all of our inequalities in the mix. We now need to do our full optimization, and we can do that. So this is basically how you do it: you add one constraint at a time.
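The add-one-constraint-at-a-time reduction can be sketched in code. Everything here is illustrative, not the lecture's own code: constraints are pairs `(a, b)` meaning a·x ≥ b in two dimensions, and `optimize_from_vertex` is a brute-force stand-in (vertex enumeration) for a real optimization-from-a-starting-point algorithm. The sketch already includes the capping trick discussed below, adding a·x ≤ b so the intermediate problem stays bounded.

```python
from itertools import combinations

def vertex_of(a1, b1, a2, b2):
    # Intersection of the boundary lines a1·x = b1 and a2·x = b2 (Cramer's rule).
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None  # parallel lines
    return ((b1 * a2[1] - a1[1] * b2) / det,
            (a1[0] * b2 - b1 * a2[0]) / det)

def optimize_from_vertex(ineqs, v, start):
    # Stand-in oracle: maximize v·x over {x : a·x >= c for (a, c) in ineqs}.
    # A real algorithm would walk edge to edge from `start`; here we just
    # brute-force every feasible vertex (fine for tiny 2-D examples).
    best = start
    for (a1, b1), (a2, b2) in combinations(ineqs, 2):
        p = vertex_of(a1, b1, a2, b2)
        if p and all(a[0] * p[0] + a[1] * p[1] >= c - 1e-9 for a, c in ineqs):
            if v[0] * p[0] + v[1] * p[1] > v[0] * best[0] + v[1] * best[1]:
                best = p
    return best

def add_constraint(ineqs, current, a, b):
    # One step of the reduction: given a vertex `current` satisfying `ineqs`,
    # extend the solution so it also satisfies a·x >= b.
    if a[0] * current[0] + a[1] * current[1] >= b - 1e-9:
        return ineqs + [(a, b)], current  # already satisfied
    # Maximize a·x, capped by the extra constraint a·x <= b for boundedness.
    capped = ineqs + [((-a[0], -a[1]), -b)]
    p = optimize_from_vertex(capped, a, current)
    if a[0] * p[0] + a[1] * p[1] >= b - 1e-9:
        return ineqs + [(a, b)], p
    return None  # the enlarged system has no solution
```

For instance, starting from the box 0 ≤ x ≤ 4, 0 ≤ y ≤ 3 at the vertex (0, 0) and adding x + y ≥ 5 returns a point on the line x + y = 5, while adding x + y ≥ 100 correctly reports infeasibility.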
There is a technical point to keep in mind. Things are a bit messier if some of these intermediate systems that you're trying to solve don't have optima: they might be one of these unbounded systems where things can get as large as you like. Now, fixing this is not hard, you just need to play around a bit. First, you want to start with n constraints, so that you actually have a single vertex to start out your system. And then, when you're trying to add the constraint v·x ≥ t, you don't just maximize v·x; that would be a problem, because that might be unbounded. Instead, you also add the additional constraint that v·x is at most t. This guarantees that v·x will actually attain a maximum, of at most t, and once you find it, you're in good shape. Okay, so that was that. Let's talk about solution finding. How do we go from being able to find a solution to finding the best one? We somehow need to guarantee that the solution we found is the best solution, and there's actually a good way to do that: we can use duality. The point is that duality gives you a good way to verify that your solution is optimal, by solving the dual program and providing a matching upper bound. So what you want to do is find both a solution to the original program and a matching dual solution. If you want to minimize v·x subject to Ax ≥ b, what you can do is instead solve this bigger system: Ax ≥ b, y ≥ 0, yᵀA = v. That says x is a solution to the primal and y is a solution to the dual. And then we also want x·v = y·b, so that we have matching solutions to the primal and the dual. The fact that we have this solution to the dual means that you can't do any better in the primal. And so for any solution to this system, if you look at x, that gives an optimal solution to the original problem. Finally, let's talk about satisfiability. Suppose you can only determine whether or not there is a solution.
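The duality-based reduction amounts to writing down one big system over the stacked variable z = (x, y). A minimal sketch under my own conventions: constraints are encoded as (row, rhs) pairs meaning row·z ≥ rhs, each equality is split into two opposite inequalities so a pure solution finder can handle it, and the function name `primal_dual_system` is hypothetical.

```python
def primal_dual_system(A, b, v):
    # Reduction from full optimization (min v·x s.t. Ax >= b) to solution
    # finding.  Variables are z = (x, y) with x of dimension n, y of
    # dimension m; every constraint is a row with row·z >= rhs.
    m, n = len(A), len(A[0])
    rows = []
    # A x >= b : x is primal-feasible.
    for i in range(m):
        rows.append((list(A[i]) + [0] * m, b[i]))
    # y >= 0 : y is dual-feasible, part 1.
    for j in range(m):
        e = [0] * m
        e[j] = 1
        rows.append(([0] * n + e, 0))
    # yᵀA = v : y is dual-feasible, part 2 (equality split into >= and <=).
    for k in range(n):
        col = [A[i][k] for i in range(m)]
        rows.append(([0] * n + col, v[k]))
        rows.append(([0] * n + [-c for c in col], -v[k]))
    # v·x = b·y : matching objective values force both to be optimal.
    rows.append((list(v) + [-t for t in b], 0))
    rows.append(([-c for c in v] + list(b), 0))
    return rows
```

Any z = (x, y) satisfying every row has its x-part optimal for the original program; for example, with A the 2×2 identity, b = (1, 2), v = (1, 1), the point x = (1, 2) together with y = (1, 1) satisfies the whole system.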
How is this going to help you find an actual solution? Well, we know there's always a solution at a vertex of the polytope, and at a vertex, some of the inequalities are actually tight. So all we need to do is figure out which inequalities are tight, and then we can solve for the intersection of those equations using Gaussian elimination. So how does this work? We have a bunch of linear inequalities; here are the lines where we have equality, and the solution set of the system is this hidden triangle here. We want to find a point in this triangle, so what are we going to do? We pick some inequality and ask: is there a solution not just to the original system, but to the system where that inequality is actually an equality? For this one, the answer is no, because there is no point in the triangle that also lies on that line. That means we can actually throw that line out: there are no solutions on it, so it doesn't help us at all. Next we try another one; it doesn't work either, so we throw it out. This one, yes: there are solutions that are both inside the triangle and on this line, so we keep that inequality around as an equality. Then we try another one: is there a solution to the system where we're on both of those lines? No, the intersection of those lines is not a solution, so we keep going. What about these lines? Yes, if you look at the intersection of those lines, it gives you that point, which is a solution. But now we know there's a solution at the intersection of those lines, so we solve for that intersection with Gaussian elimination, and we're done. And so, just by solving a whole bunch of satisfiability questions, we're able to actually find a point that satisfies the system. So, to make sure that we understand this: if we want to find a solution to a linear program with m inequalities and n variables.
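The tighten-one-inequality-at-a-time procedure can be sketched as follows. Again, everything is a toy of my own construction: `feasible` is a stand-in 2-D satisfiability oracle (it checks intersections of pairs of boundary lines, so it assumes the feasible set, if nonempty, contains a vertex), the input system is assumed satisfiable, and the final step solves the two tight equations directly rather than running general Gaussian elimination.

```python
from itertools import combinations

def intersect(a1, b1, a2, b2):
    # Point where a1·x = b1 meets a2·x = b2, via Cramer's rule (None if parallel).
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None
    return ((b1 * a2[1] - a1[1] * b2) / det,
            (a1[0] * b2 - b1 * a2[0]) / det)

def feasible(ineqs, eqs):
    # Stand-in satisfiability oracle: is there an x with a·x >= b for every
    # (a, b) in ineqs and a·x == b for every (a, b) in eqs?  Brute-forces
    # all candidate vertices (fine for tiny 2-D examples).
    lines = list(ineqs) + list(eqs)
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        p = intersect(a1, b1, a2, b2)
        if p is None:
            continue
        if (all(a[0] * p[0] + a[1] * p[1] >= b - 1e-9 for a, b in ineqs) and
                all(abs(a[0] * p[0] + a[1] * p[1] - b) <= 1e-9 for a, b in eqs)):
            return True
    return False

def find_solution(ineqs):
    # Satisfiability -> solution finding: test each inequality once, keeping
    # it as an equality whenever the tightened system is still satisfiable.
    eqs = []
    for i, c in enumerate(ineqs):
        if len(eqs) == 2:  # two tight lines pin down a vertex in 2-D
            break
        others = ineqs[:i] + ineqs[i + 1:]
        if feasible(others, eqs + [c]):
            eqs.append(c)
    (a1, b1), (a2, b2) = eqs
    return intersect(a1, b1, a2, b2)
```

On the triangle x ≥ 0, y ≥ 0, x + y ≤ 3, this makes at most m calls to the oracle, keeps the first two inequalities as equalities, and returns the vertex (0, 0).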
How many times would we have to call the satisfiability algorithm in order for this to work? Well, the answer is that we need to do it m times: you need to test each inequality once. When you find inequalities that work, you keep them around as equalities, and you don't need to test them again; you just keep checking the other ones. Each inequality gets tested once, for a total of m tests. So you run the satisfiability algorithm m times, and that lets you actually find a solution. So those are the different formulations. Next time, we're going to come back and start looking at honest algorithms for solving these linear programming problems.