Now what are we going to do with this quadratic program? What can we do with it? We have the quadratic relaxation for MaxCut, where our variables are vectors. How does that resemble linear programming relaxations? Let's compare the two. When you have a linear program, your variables are real numbers. When you have a vector program, quadratic in our case, your variables are vectors. When you have a linear program, your objective is a linear function of the variables that you want to maximize or minimize. When you have a vector program, your objective is a linear function of what? Of the dot products between your vectors, which you want to minimize or maximize. When you have a linear program, let's say you have equality constraints, linear constraints in the variables. When you have a vector program, let's say there are also equality constraints; they are linear constraints in the dot products between the vectors. So that's what a vector program is, and that's how it compares to a linear program. The variables are vectors, and every linear function is a linear function of the dot products.

What can we say about vector programs? What is the big deal? So, we have a general form of a vector program on the right-hand side. And now we can define a variable y_ij as the dot product v_i · v_j. So what is your objective? It can be rewritten as: minimize or maximize a linear function of your variables y_ij, given linear constraints on these variables y_ij, and given the additional constraint that there must exist vectors v_i such that y_ij = v_i · v_j. So how can we express this constraint? This is all linear; it's all very similar to a linear program, except for this strange constraint: there must exist v_i's such that y_ij is the dot product v_i · v_j. And of course your variables have double indexing; they are arranged in a matrix instead of just linearly as x_1, x_2, ..., x_n. But that's just notational.

Well, this is where positive semi-definite matrices come into play. Consider a square matrix, and assume it's symmetric: y_ij = y_ji. There exist vectors v_i such that y_ij = v_i · v_j if and only if the matrix Y is positive semi-definite. So that's a characterization of positive semi-definite matrices. And so we can think of our relaxation as a condition that a certain matrix Y is positive semi-definite. And by the way, there are other equivalent characterizations of positive semi-definite matrices. One of them is that for every vector a, if you look at a transpose times the matrix Y times a, that is, the sum over every i, j of a_i y_ij a_j, it must be non-negative. This is another equivalent characterization of positive semi-definite matrices.

Thus we can rewrite our relaxation of MaxCut as follows. We want to minimize or maximize our linear objective, with linear constraints on the y_ij's, with the symmetry constraints, and with the condition that the matrix Y must be positive semi-definite, that is, for every vector a, the sum over i, j of a_i y_ij a_j is greater than or equal to 0. Fix one vector a and consider this expression: it is the sum of a_i a_j times y_ij. This is a linear function of the y_ij's. So what we have here is an infinite number of linear inequalities. Each of them defines a half-space, and therefore this is just a convex region of space in which we're trying to optimize a linear function. So we're trying to do convex optimization: optimize a linear function over a convex region of space.
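To make this characterization concrete, here is a minimal numerical sketch, not from the lecture, assuming only NumPy. It builds Y as the matrix of dot products of some vectors (a Gram matrix), checks that Y is symmetric and positive semi-definite, and then factors a PSD matrix back into vectors, which is exactly the equivalence we just used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Take any 5 vectors v_i in R^3 and form their Gram matrix
# Y_ij = v_i . v_j; such a matrix is always symmetric and PSD.
V = rng.standard_normal((5, 3))
Y = V @ V.T

print(np.allclose(Y, Y.T))                    # symmetric
print(np.linalg.eigvalsh(Y).min() >= -1e-9)   # eigenvalues >= 0 (up to round-off)

# The other characterization: a^T Y a = ||V^T a||^2 >= 0 for every a.
a = rng.standard_normal(5)
print(a @ Y @ a >= -1e-9)

# Conversely, any symmetric PSD matrix factors as a Gram matrix:
# from Y = Q diag(lam) Q^T, the rows of Q * sqrt(lam) are vectors
# whose dot products reproduce the entries of Y.
lam, Q = np.linalg.eigh(Y)
U = Q * np.sqrt(np.clip(lam, 0, None))
print(np.allclose(U @ U.T, Y))
```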
The theorem is: if we have an optimization problem of that type, with the condition that Y is positive semi-definite, then it can be "solved" in polynomial time. How? With an algorithm called the ellipsoid algorithm, the ellipsoid method; at least, that was the first algorithm to solve this. Why did I put quotation marks around "solved"? Because sometimes the optimal solution could be irrational. Sometimes the optimal solution could be something like the square root of two, and you can never write the square root of two exactly on your machine with the kind of representations we have in mind. So what does this mean? It means you can get arbitrarily close: you can get to within an additive error of epsilon, in time polynomial in log of one over epsilon, for arbitrary epsilon. So you can get as close as you want to the optimal solution; it's solved in the best possible sense. So these kinds of programs, semi-definite programs as we call them, can be "solved" in polynomial time.

And so this gives us a new tool, a new way to find relaxations of our integer programs. If we go back to our quadratic relaxation for MaxCut, maximizing the weighted sum of (1 - v_i · v_j)/2 such that the v_i's are unit-length vectors, this can be solved in polynomial time by the ellipsoid method. So we can take this and write it down; the objective depends on the graph and on the weights of the edges. And then what do we get as a solution? For each vertex, we get a unit vector in some high-dimensional space, on the sphere, and that is the solution of our quadratic relaxation for MaxCut. We can compute this, and because it's a relaxation, its value will be at least the value of the maximum cut. And so now, just like with linear programming relaxations, what we need to do is figure out how to round this solution to get a partition of our graph.
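As an end-to-end illustration of this step, here is a small sketch of solving the MaxCut semi-definite relaxation and recovering the unit vectors. It is not from the lecture: it assumes the cvxpy modeling library with an SDP-capable solver (such as the bundled SCS) plus NumPy, and the 5-cycle with unit edge weights is just a hypothetical instance. Modern solvers use interior-point or first-order methods rather than the ellipsoid method, but they give the same kind of epsilon-close answer.

```python
import cvxpy as cp
import numpy as np

# Hypothetical instance: the 5-cycle with unit edge weights.
n = 5
W = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    W[i, j] = W[j, i] = 1.0

# Variable: the matrix Y of dot products y_ij = v_i . v_j.
# PSD=True imposes "Y is symmetric positive semi-definite";
# diag(Y) == 1 says each v_i is a unit vector.
Y = cp.Variable((n, n), PSD=True)
constraints = [cp.diag(Y) == 1]

# Objective: sum over edges of w_ij * (1 - y_ij) / 2. Since W is
# symmetric, summing over all ordered pairs counts each edge twice,
# hence the division by 4.
objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - Y)) / 4)

prob = cp.Problem(objective, constraints)
prob.solve()
print("SDP value (an upper bound on the max cut):", prob.value)

# Recover the unit vectors: factor Y = V V^T, so row i of V is v_i.
lam, Q = np.linalg.eigh(Y.value)
V = Q * np.sqrt(np.clip(lam, 0, None))
print(np.allclose(V @ V.T, Y.value, atol=1e-6))
```

On the 5-cycle the SDP value should come out around 4.52, strictly above the true maximum cut of 4; that gap is exactly what the rounding step we turn to next has to contend with.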