In this lesson, I'm going to give a brief introduction to linear programming. Linear programming is one of the most important optimization techniques, and there is a lot of theory behind it; many books have been written about it. But for us, in this course on approximation algorithms, we're just going to treat it as a black box. That means we need to know what a linear program is, and we need to know that it can be solved efficiently, but we don't need to know the algorithms that actually solve one.

So, what is a linear program? A linear program is a mathematical optimization problem where we're given a number of variables. In this example we have just two variables, X_1 and X_2; usually you have many more. We want to optimize some function of these variables. In this example we want to minimize a function of the variables, so we can see the function as a cost function; the term objective function is also often used, and sometimes, instead of minimizing the objective function, we want to maximize it. Besides optimizing this function, we also have to make sure that the values we choose for the variables satisfy certain constraints. As you can see, in this example we have four constraints.

What is crucial in this formulation is that both the objective function and the constraints are linear: no quadratic functions or anything like that. So here are some examples of things that are not allowed. This would not be a valid cost function, because, as you see, it has a term that multiplies two of our variables.
So, it's no longer linear in the variables. Here's another example where, again, you see a term that is not linear. This requirement holds for the cost function, but also for the constraints. So again, something like this is not allowed, and neither is something like this, which is a combination of two linear constraints with an "or" between them. What you can have is a collection of constraints that must all be satisfied, but you are not allowed to say "this constraint or that constraint must be satisfied".

So, that's linear programming, and a good way to think about it is to visualize it and look at its geometric interpretation. Again, let me do this for two variables, because then I can draw nice pictures. So, let's look at the following two-dimensional linear program. We want to find values for two variables, X_1 and X_2, which means that we can represent a solution as a point in two-dimensional space: we have to find an X_1 value and an X_2 value, and together they define a point in my two-dimensional solution space.

Now, let's look at the constraints. If you look at the bottom constraint, X_2 is at least 0, that simply means that all valid solutions must lie on or above the line X_2 = 0, so they must lie in this half-plane. Something similar is true for all of the constraints. If you look at the second constraint, which is here, I can draw the line 2X_1 - X_2 = 5, and the constraint says that this expression should be at most 5, which means our points should lie on a certain side of this line. We can do this for all the constraints, so every constraint gives us a half-plane. Here you see the third constraint, the fourth constraint, and so on.
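The half-plane view can be made concrete in a few lines of code. The sketch below just checks, for each constraint written as a . x <= b, on which side of the boundary line a point lies; the coefficients are assumed for illustration (only 2*X_1 - X_2 <= 5 and X_2 >= 0 are spelled out in the lecture).

```python
# A constraint a . x <= b defines a half-plane; a point is feasible exactly
# when it lies in every one of these half-planes.

def satisfies(point, a, b):
    """True if the point lies in the half-plane a . x <= b."""
    return sum(ai * xi for ai, xi in zip(a, point)) <= b

# Assumed constraints, each as (coefficient vector a, right-hand side b).
constraints = [
    ((2, -1), 5),   # 2*x1 - x2 <= 5
    ((0, -1), 0),   #       -x2 <= 0, i.e. x2 >= 0
]

def feasible(point):
    """True if the point satisfies all constraints, i.e. lies in the feasible region."""
    return all(satisfies(point, a, b) for a, b in constraints)

print(feasible((1, 1)))  # True: 2*1 - 1 = 1 <= 5 and -1 <= 0
print(feasible((4, 1)))  # False: 2*4 - 1 = 7 > 5
```

Intersecting the half-planes is exactly the `all(...)` over the constraints: a point is in the feasible region if and only if no single constraint rejects it.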
So, each constraint gives us a half-plane. If we had more variables, say three, then a solution would be a three-dimensional point and the half-planes would become half-spaces, so the constraints define some kind of polytope, possibly in a high-dimensional space. A solution that satisfies all the constraints must lie in the intersection of all these regions. In this case, you see that the intersection is here. This intersection is called the feasible region, because a point in the solution space is a valid solution, a feasible solution, if and only if it lies in this region. So, you could say this region represents all the constraints.

Now, let's look at the objective function, or, since we want to minimize, the cost function. Since the cost function is linear, it says that we want to go as far as possible in a certain direction. Here we wanted to minimize -X_1 + 4X_2; because we are minimizing, we switch the signs and get the vector (1, -4), and we want to go as far as possible in that direction. Of course, our solution should lie inside the feasible region, so in this case this point would be the optimal solution, the solution to our linear program.

Now, the solution to a linear program can be of different types. The standard, or easiest, type is a unique solution. That's what we saw before: we have a feasible region, and there's one unique point in the feasible region that gives us the optimal value. But there are more possibilities. It could also be that there is no solution. Here you see three half-planes, and as you can see, their common intersection is empty: it's not possible to find a point that satisfies all three constraints at the same time.
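Because the objective is linear and the feasible region is a polytope, an optimum (when one exists) is attained at a vertex, a corner where constraint boundaries meet. For two variables this can be illustrated by brute force: intersect every pair of boundary lines and keep the feasible intersection points. The instance below is assumed for illustration; only the objective -X_1 + 4X_2 and the constraint 2X_1 - X_2 <= 5 appear in the lecture, and the constraints X_1 <= 4, X_1 >= 0, X_2 >= 0 are made up so the region is bounded.

```python
import itertools
import numpy as np

# Assumed instance: minimize -x1 + 4*x2 subject to
#   2*x1 - x2 <= 5,   x1 <= 4,   x1 >= 0,   x2 >= 0
# written uniformly as A x <= b.
A = np.array([[ 2.0, -1.0],
              [ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([5.0, 4.0, 0.0, 0.0])
c = np.array([-1.0, 4.0])  # objective: minimize c . x

vertices = []
for i, j in itertools.combinations(range(len(A)), 2):
    sub = A[[i, j]]
    if abs(np.linalg.det(sub)) < 1e-9:
        continue  # the two boundary lines are parallel: no intersection point
    p = np.linalg.solve(sub, b[[i, j]])  # intersection of the two boundary lines
    if np.all(A @ p <= b + 1e-9):        # keep it only if ALL constraints hold
        vertices.append(p)

best = min(vertices, key=lambda p: float(c @ p))
print(best, float(c @ best))
```

For this instance the feasible vertices are (0, 0), (2.5, 0), and (4, 3), and the minimum is attained at (2.5, 0) with value -2.5. Enumerating all pairs of constraints is only meant to make the geometry concrete; real LP algorithms are far more efficient.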
So, in this case, the linear programming algorithm should tell you: "Sorry, there's no solution." There are more options. It could also be that the solution is unbounded. What does that mean? Well, suppose the feasible region looks like this, and our objective function tells us to find, let's say, the highest point; so in this example we want to maximize X_2. As you can see, you can make X_2 bigger and bigger and still stay within the feasible region. So in this case there is a solution, in fact infinitely many solutions, but there is no solution that gives the optimal value, because you can always improve a little bit. Here the linear programming algorithm should tell you that the solution is unbounded, and perhaps that along this particular line you can continue all the way to infinity and keep improving your solution.

There is one more type of solution you can have, and that's where the solution is bounded, so you cannot go all the way to infinity and keep improving, but it is not unique. That's what you see in this example: if I want to find, let's say, the lowest point, my objective function is to minimize X_2, but I have a constraint whose boundary is horizontal. Then all points on that boundary have the same value of the objective function. In this case it is usually sufficient if your linear programming algorithm just gives you one of the points that optimize the objective function.

So, let's summarize. Here you see linear programming in a slightly more general form. Instead of two variables, we now have n variables; typically, in applications of linear programming, the number of variables is large. Let's call the variables X_1 to X_n. We want to find values for these variables that optimize the objective function.
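Off-the-shelf solvers report exactly these outcome types. Here is a sketch using SciPy's linprog (the choice of SciPy is an assumption; any LP solver exposes something similar). In SciPy's convention, status 0 means an optimal solution was found, 2 means infeasible, and 3 means unbounded.

```python
from scipy.optimize import linprog

# Infeasible: x1 <= 1 and x1 >= 2 cannot hold at the same time.
infeasible = linprog(
    c=[1, 0],
    A_ub=[[1, 0], [-1, 0]],  # x1 <= 1 and -x1 <= -2 (i.e. x1 >= 2)
    b_ub=[1, -2],
    bounds=[(0, None), (0, None)],
    method="highs",
)
print(infeasible.status)  # 2: no point satisfies all constraints

# Unbounded: maximize x2 (i.e. minimize -x2) with no upper bound on x2.
unbounded = linprog(
    c=[0, -1],
    A_ub=[[1, 0]],  # only x1 is constrained from above
    b_ub=[4],
    bounds=[(0, None), (0, None)],
    method="highs",
)
print(unbounded.status)  # 3: the objective can be improved forever
```

So a caller must always inspect the status before using the returned point; in the unbounded case there is no optimal point to return at all.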
So, the whole linear program is specified by the number of variables together with the constants that define the objective function and the constraints. These c_i's, a_ij's, and b_i's are simply constants that are part of the input, and given them, we want to maximize or minimize a linear function of the variables such that a set of linear constraints is satisfied. Like I said, in practice the number of variables is typically very large, and so is the number of constraints.

What I did here is write each constraint as "some linear function is at most some constant". You could also write "some linear function is at least some constant"; of course, you can easily convert one form into the other by switching signs. And you could also write "some linear function is equal to some constant". Again, if you only allow at-most constraints, then an equality can be represented by two at-most constraints that together force equality.

The previous linear program can also be written in a somewhat more concise notation: the input is a cost vector c, a matrix A, and a vector b, where A and b together determine the constraints. A linear programming algorithm should read the vector c, the matrix A, and the vector b, and compute values for the vector x of variables such that the objective function is optimized and all the constraints are met. Of course, as we saw, it can also happen that the algorithm has to report that the linear program is infeasible, or that it is unbounded. So, what is important for us is not how these linear programs are solved, but that they can be solved efficiently.
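In this matrix notation, "minimize c . x subject to A x <= b" maps directly onto a solver call. Here is a sketch with SciPy's linprog, using assumed numbers consistent with the earlier two-variable example: minimize -X_1 + 4X_2 subject to 2X_1 - X_2 <= 5 and X_1 <= 4, with both variables nonnegative (the bound X_1 <= 4 is made up so the instance is bounded).

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, 4.0])       # objective: minimize c . x
A = np.array([[2.0, -1.0],      # 2*x1 - x2 <= 5
              [1.0,  0.0]])     #   x1      <= 4
b = np.array([5.0, 4.0])

res = linprog(c, A_ub=A, b_ub=b,
              bounds=[(0, None), (0, None)],  # x1 >= 0, x2 >= 0
              method="highs")
print(res.x, res.fun)  # optimal point and its objective value
```

With these numbers the optimum is at (2.5, 0) with value -2.5: the positive coefficient on X_2 pushes it to 0, and then 2*X_1 <= 5 is the binding constraint on X_1.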
So, the only thing we need to know is this particular theorem: a linear program can be solved in polynomial time. There are a lot of details hiding here, for instance that the input size is measured in the number of bits, but again, for us the only thing that matters is that if we can represent something as a linear program, then we're in business, because we can solve it efficiently. In the next lesson we're going to see how to use this to actually develop approximation algorithms.