0:00

Okay. So welcome back. MIP part two. What we are going to do today is look at modeling techniques for MIP models. In a sense, I want to go back to what a good MIP model is. We already touched upon it, but I want to go into more detail, and then I'm going to tell you about some basic modeling techniques that we use in MIP. The reason I want to talk about this is that there are typically things we want to model that are not linear constraints, so how we are going to model them as linear constraints is interesting, okay?

I told you before that essentially we are using branch and bound for solving a MIP, and a lot of the techniques for solving MIPs are going to be refinements of branch and bound. We'll talk about that in the next couple of lectures. But essentially the key idea is branch and bound, and we need to prune suboptimal solutions. A necessary condition for effective branching and bounding is that the relaxation has to be precise: it has to be a really good approximation of the optimal value of the MIP model. So a good MIP model is going to be a model with a good linear relaxation, and therefore, when we are building the model, we have to take into account that behind the scenes we are going to solve its linear relaxation, and we want that linear relaxation to be strong.

Â Okay? So, in warehouse location, we had this

Â model, right? So and I told you, you know, this was one

Â of the models that we could have written, okay?

Â And they may be many, okay? So once again, you know we have to open

Â these warehouses, this side, which, which, which, which warehouses to open.

Â And then which ware, which warehouse would serve which customer.

Â Okay and so in a sense one of the so assume that we stick to the decision

Â variables. We like them but no we want to see hmm,

Â is there another way to express these constraints.

Â Okay so and we going to focus on the first one.

Â So we have a warehouse okay, and we know that that warehouse can only serve a

Â customers when its open otherwise it cannot, okay?

Â And remember what we did was basically having these very simple inequalities but

Â many of them right? Stating that you know you know YWC which

Â is basically saying warehouse W can serve customer C is one only if you know

Â warehouse W is open. Okay, so we have these linear

Â inequalities making sure that even warehouse is closed nobody, no customers

Â can be assigned to it. And if it's open, then everybody can be

Â assigned to it, okay? And so, essentially, that's what we did.

Â And, of course, we have to have many of these constraints, right?

Â As many as there are pairs of warehouse customers, okay?

Â So that's what we used last time. Can we do, can we find a formulation,

Â let's say, which is more concise? Okay, can we do that?

Â Okay? So we going to look at this and try to

Â replace all these constraints with fewer of them.

What we want to do, for a particular warehouse, is deal with all of these constraints in one step. And this is how the constraint is going to be expressed. We are basically going to take the warehouse, look at all the customers that can be assigned to it, and sum the variables y[w,c]. Then we want to make sure that these variables all become 0 if the warehouse is not open. That's easy to do: you just put x[w] on the right-hand side of the inequality; if it is 0, then the sum has to be 0, and all the terms of the sum are going to be 0. But then we multiply x[w] by the number of customers, because in the worst case every one of the customers is assigned to that particular warehouse. If we multiply by the number of customers, the constraint does exactly what we want: if the warehouse is open, then essentially all the customers can be assigned to it, and if the warehouse is closed, then all of these y[w,c] variables are going to be 0.

So this constraint is kind of magical, right? In one step it takes care of all the small linear inequalities that we had. And this becomes my model now: essentially I have one of these constraints for every one of the warehouses. We don't have to enumerate warehouse-customer pairs; that is expressed by the sum. So I have many fewer constraints, in a sense, but they are bigger.
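The size difference is easy to quantify. A quick sketch with hypothetical instance sizes (these numbers are mine, not the lecture's benchmark):

```python
# Hypothetical instance: 5 warehouses, 40 customers (illustrative sizes only).
num_warehouses, num_customers = 5, 40

# First model: one inequality  y[w,c] <= x[w]  per warehouse-customer pair.
pairwise = num_warehouses * num_customers

# Second model: one inequality  sum_c y[w,c] <= num_customers * x[w]  per warehouse.
aggregated = num_warehouses

print(pairwise, aggregated)  # 200 5
```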

So now, ha! I have two models, and I have to decide which one is the best. How do I do that? How do I decide that this model is better than the other one? We have one indication here: we are saying, oh yeah, I have fewer constraints, so maybe this is good. But I told you last time that a good model is a model which has a good linear relaxation. So what about the quality of the linear relaxation of this model compared to the quality of the linear relaxation of the model that I showed you last time? You kind of need to relate these two, and if one is a stronger relaxation, that's probably the one that you want to use, unless it has a really huge number of constraints, right?

What I'm going to show you is that the model from the first lecture has a stronger relaxation than the model I've just shown you. Why? Because look at these inequalities. If I have a solution to all of these constraints, with the variables between 0 and 1, I can sum all these linear inequalities for a particular w. I can take all of them and sum them. And if I sum them, what do I get? I get exactly the aggregated constraint.

Â 5:33

So, in a sense, whatever the solution to the linear relaxation of these pairwise constraints, when I sum them, it is going to satisfy the aggregated constraint. Which basically means that any solution to this model is also a solution to that one. So I know that the aggregated model cannot have a stronger linear relaxation than the pairwise one. And actually, it can be a weaker relaxation.
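To see how the aggregated form can be weaker, here is a small hypothetical fractional point (the numbers are mine, not from the lecture's benchmark) that satisfies the aggregated constraint while violating the pairwise inequalities y[w,c] <= x[w]:

```python
# One warehouse w and 4 customers, at a fractional point of the relaxation.
# Pairwise model:   y[c] <= x       for every customer c
# Aggregated model: sum(y) <= 4*x   (|C| = 4 customers)

x = 0.25                   # warehouse is "one quarter open" in the relaxation
y = [1.0, 0.0, 0.0, 0.0]   # customer 0 fully served anyway

aggregated_ok = sum(y) <= 4 * x              # 1.0 <= 1.0 -> satisfied
pairwise_ok = all(yc <= x for yc in y)       # 1.0 <= 0.25 -> violated

print(aggregated_ok, pairwise_ok)  # True False
```

So the aggregated relaxation admits fractional points that the pairwise one cuts off, which is exactly what "weaker relaxation" means.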

So, can you think of a case where it's going to be a weaker relaxation? What is the basic difference between these two? What we know in the pairwise model is that every y[w,c] is always going to be smaller than x[w]. And that's the big step here: you have to remember that the linear relaxation is not always going to assign an integer value to x[w] or y[w,c]. If these values were always 0 or 1, the two models would behave the same. But in the linear relaxation, a value might be 0.5, right? So what could happen in this particular case? The pairwise constraints always make sure that y[w,c] is smaller than x[w]. Is that the case here, in the aggregated model? Think about it. Is it the case that I would still have the property that y[w,c] is always smaller than x[w]? If you answer that question, you will know whether this one is at least as strong as that one. We already know the pairwise model is at least as strong, and in practice it's easy to find a case where the aggregated model is actually weaker. So the initial model has a better linear relaxation, and it is very likely a better MIP model than the other one. So let me show you a very interesting example of these two linear relaxations.

These are our benchmarks for warehouse location. They are small, but it is very interesting to look at the two linear relaxations on them. Remember, we are minimizing the cost, so we want this lower bound, this linear relaxation, to be as high as possible. What you see in the first column is the first MIP model that I showed you, with its linear relaxation; you see the numbers over there. And this is the second model, where we essentially aggregate the constraints. What you can see is that the value of the linear relaxation here is much weaker, and the difference is substantial: we have gaps between 9.5% and 20% between these two relaxations. So this is the difference between these two models and their linear relaxations: in the case where you aggregate all the small linear inequalities into one constraint per warehouse, the relaxation is substantially weaker. A 20% difference is huge, right? If you are 20% closer to the optimal value, you are going to prune much more.

And so this is interesting: when you have a MIP model, you can look at its relaxation and ask how good it is. It is going to tell you a lot about the pruning capability of your linear relaxation and your branch and bound algorithm. So, once again, these are typical things you can do: look at the relaxation and ask how good it is, try to prove properties about it, and look at how it behaves in practice. Essentially, to prune effectively you need a very strong linear relaxation, and a good model is going to be a MIP model with a very good linear relaxation.

So, let's look at another example now: the map coloring problem that we have seen earlier in this class. The reason we are going to use it is to show some of the modeling techniques of MIP, various models, and how they impact the linear relaxation. This is the coloring problem as we stated it in constraint programming. We are minimizing the maximum color: min max, we want to minimize the number of colors. And then we have all these constraints, which basically say that two adjacent countries have to be given different colors. Now, when you look at this not-equal constraint, well, that's not a linear constraint, so you can't express it directly in a MIP. You have to reformulate it. And the question is how we are going to do that: how can we actually represent it as one or more linear constraints?

Â 9:51

Okay? So, essentially, x is different from y when? When x is smaller than y, or x is greater than y, okay? For integer variables, you can express that as: x is smaller than or equal to y minus 1, or x is greater than or equal to y plus 1. Both of these disjuncts are linear constraints. Of course, we have a disjunction here, and that's annoying, right? Because a MIP model doesn't support disjunctions. So we have two linear constraints that are in a disjunction. Can we actually use these two linear constraints but remove the disjunction?

So I'm going to show you a transformation which is called the big-M transformation, which is used all over the place in MIP models when you have things like disjunctions. The key idea is to do two things: we introduce a 0/1 variable, which is called b, and we introduce a very large number, a huge number; take, you know, 40 billion. And now we can express the disjunction as two linear inequalities. The first one is: x is smaller than or equal to y minus 1, plus b times this big M, this big number; b is a 0/1 variable, it takes value 0 or 1. The second constraint is: x is greater than or equal to y plus 1, minus (1 minus b) times the big M. So essentially what I'm showing you here is a technique to transform a disjunction of two linear constraints into a conjunction of two linear constraints, and the trick is to introduce one more binary variable and then use this big-M value.

Now I'm going to give you the intuition of what's happening here. When you look at these two constraints, depending on the value of b, one of them is going to implement one part of the disjunction. Look at the case where b is equal to 1. What's going to happen? b times M is going to be a huge number, so the first constraint says that x has to be smaller than something related to y plus a huge number, which we really don't care about: the first constraint is always satisfied. So the really important constraint is the second one, and since b is equal to 1, 1 minus b is 0, and (1 minus b) times M is also 0. So we have the constraint that x has to be greater than or equal to y plus 1. In a sense, when b is equal to 1, the right part of the disjunction has to be true: we are in the case where x is greater than y, and of course in that case it is different from y.

And the case where b is equal to 0 is just the opposite. When b is equal to 0, 1 minus b is 1, times this big number; that's a big number, and we are subtracting it from the value of y. That makes the right-hand side of the second constraint tiny, tiny, tiny, and of course x is always going to be greater than that. So what we are enforcing in that case is the first constraint: b is equal to 0, so we have x smaller than or equal to y minus 1, which basically means x is smaller than y. So in a sense this big-M transformation, using this Boolean variable b and the big M, makes sure that we can transform the disjunction into a conjunction, and x different from y can be represented as: x is smaller than y, or x is greater than y.
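A small sketch of the transformation as just described; the function name and the value M = 1000 are illustrative choices of mine (any M larger than the variables' range works):

```python
M = 1000  # "big M": any value larger than the variables' range will do

def bigm_holds(x, y, b):
    """The conjunction replacing the disjunction  x <= y-1  OR  x >= y+1."""
    return x <= y - 1 + b * M and x >= y + 1 - (1 - b) * M

# b = 1 selects the branch x > y; b = 0 selects the branch x < y.
assert bigm_holds(x=5, y=2, b=1)       # x >= y + 1 holds
assert not bigm_holds(x=2, y=2, b=1)   # x == y is rejected
assert bigm_holds(x=1, y=3, b=0)       # x <= y - 1 holds
assert not bigm_holds(x=3, y=3, b=0)   # x == y is rejected

# But in the relaxation b may be fractional: b = 0.5 makes both
# inequalities trivially true, even when x == y.
print(bigm_holds(x=2, y=2, b=0.5))  # True
```

The last line is exactly the weakness discussed next: a fractional b switches both inequalities off at once.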

So now you have to think about what the linear relaxation is going to do. And I told you, this linear relaxation is a beast. It's evil. It's always going to do what you don't want it to do. So you have to put yourself in the mind of this linear relaxation, this nasty beast. If you want the objective of the linear relaxation to be as low as possible while satisfying as many constraints as possible, keeping the variables as free as possible, what are you going to do? You are going to choose b equal to 0.5. If you choose b equal to 0.5, half of a big number is still a big number, so essentially these two constraints are always going to be satisfied. Always! And so what happens is that it's as if you removed these constraints from the linear relaxation. Your linear relaxation is going to assume that it can minimize more and more and more, and you will have a big gap between the MIP model and the linear relaxation. So in a sense, the relaxation here is going to be really clever and use the value of b to minimize the value of the linear program, and therefore in this particular case it is basically going to ignore these two constraints, which is the worst thing that can happen, right? We have this beautiful constraint from which we want a very strong relaxation, and actually the relaxation is going to go down, down, down, down.

So when you look at this particular model with the big M, this is the model that we have just described. We minimize the objective function; that's the number of colors. We make sure that the objective is greater than all the colors that we are using: this is the color of every variable, and the objective function has to be at least as large as the largest color we assign. And then we re-express all the constraints saying that two adjacent countries have to take different values by using this big-M transformation that I have shown you, for every two countries that are adjacent. So this is essentially a model that we can use, with this big-M transformation.

Now, I alluded in the previous slide to the fact that this relaxation is probably going to be poor. What does it do when we try to solve it? This is illustrating what I just told you. The objective is greater than the largest color, and no two adjacent countries receive the same color. This is the value of the linear relaxation at the root node, and it's a very, very powerful relaxation, right? You see that the objective is 0, and the value of b is 0.25 on many of the constraints. So what this is basically telling you is that we need at least one color. Yeah, great. That's a very powerful relaxation: I'm trying to color this map, and the linear relaxation is telling me, yes, yes, yes, you need at least one color. Thank you. Thank you.

So what this basically tells you is that this linear relaxation is terrible, right? And the big-M formulation is in part responsible for that. So you can use this transformation, but that doesn't mean that you're going to get a good model. The MIP, when you try to solve it, is going to use 65 nodes; it finds the optimal solution in five nodes, but that really depends on how you branch, right? So can we find another model? We had this model, but its linear relaxation is so bad that it tells you only that you need at least one color. Can we find another way of modeling this as a MIP which is actually better?

And so you have to remember what I told you last time: MIP models like 0/1 variables. They are in love with 0/1 variables. So can we find a model with 0/1 variables? What we're going to do is take all the variables in this model, all the colors, and binarize them. What does that mean? It means that instead of assigning a color to a country, we're going to use a collection of binary variables which basically decide whether a given color is assigned to that country, and we're going to do that for all the colors. Is this country red? Is this country blue? Is this country green? Is this country white? In this particular case, we know that it's a map, so we need at most four colors. So we have four binary variables: the first one says, is x equal to 0? Then: is x equal to 1, is x equal to 2, is x equal to 3? So when I use the variable b[x = i], that variable is 1 whenever x is equal to i. Basically, the values of these binary variables express which value I am assigning to the original variable.

Now, if you use these binary variables, there is one more thing that you need to do: you need to make sure that the particular variable is actually assigned a particular value. So you have to say that the sum of these binary variables is equal to 1. What are you saying with that? You are making four decisions: is x equal to 0 or not, is x equal to 1 or not, is x equal to 2 or not, is x equal to 3 or not. And you are also saying that you have to make exactly one of these decisions: exactly one of these variables has to be true, which basically means I have to assign a color, a value, to that particular variable.

So, summarizing: you take a variable that has to take an integer value, and you know the range of that variable. You replace that integer variable by a collection of binary variables, one for every one of its values, and each of them is a decision variable which tells you, ooh, is that variable taking that value?

That's what we are doing here. In the coloring example, it is: is this country colored red, is this country colored blue, and so on, and so forth. Now, when you have a constraint like x different from y, meaning that country x has to be colored differently from country y, we can replace it by a set of beautiful linear inequalities. What are we saying here? Ooh, I don't want x to be 0 and y to be 0 at the same time. That's a very simple constraint: instead of the big-M formulation, you are saying, I don't want x to be 0 and y to be 0 at the same time, so the two corresponding binary variables have to sum to at most 1; I don't want that sum to be 2. And then you do the same thing for value 1: I don't want x and y to both be 1 at the same time, otherwise they are equal, and I want them to be different. Same thing for 2, same thing for 3. So now, essentially, as soon as I have the 0/1 variables it's really easy to express the constraints, and that's always the case in MIP models: as soon as you have 0/1 variables, things become much simpler.
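A minimal sketch of the binarized encoding for one pair of adjacent countries with four colors; the function and variable names here are mine, not the lecture's:

```python
# Binarize: b_x[v] == 1 means country x gets color v, for v in 0..3.
# Feasibility needs two things: each country takes exactly one color,
# and for every color v, b_x[v] + b_y[v] <= 1 (adjacent countries differ).

COLORS = range(4)

def feasible(b_x, b_y):
    one_color_each = sum(b_x) == 1 and sum(b_y) == 1
    different = all(b_x[v] + b_y[v] <= 1 for v in COLORS)
    return one_color_each and different

# x colored 2, y colored 1 -> allowed
print(feasible([0, 0, 1, 0], [0, 1, 0, 0]))  # True
# x and y both colored 2 -> forbidden
print(feasible([0, 0, 1, 0], [0, 0, 1, 0]))  # False
```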

And then I get this beautiful model now. It's long, so I have to disappear from the screen, right? But I'm still here, I'm still here. So what this is doing: look at the objective function first; this is the tricky part. You have to make sure that the objective is greater than the largest color, but we don't assign colors to the countries directly. The only decision variables that we have are, for every country and every color: is that country colored with that color? And that's a 0/1 variable. So to find the minimum value for the objective, we look at all these decision variables, but we also multiply each one by the value that corresponds to that particular binary variable. For instance, if this value is 3, we are basically saying that the objective function has to be greater than 3 times whether that particular country c is assigned value 3. And we do that for every country and every color. So we are making sure that the objective function is greater than the value that we assign; we have to use this expression because we don't have the actual value of the variables, we just have this kind of binary decision, yes or no, whether a particular country is assigned a particular value.

Then we have these other constraints that we have seen previously, which basically say that every country has to be colored with exactly one color, which means that exactly one of these binary variables for a particular country is going to be 1. And then we have all these linear inequalities, which are very simple: we take every two countries that are adjacent, and we take all the possible values, and we say that the sum of the two binary variables corresponding to assigning these two countries the same color has to be smaller than or equal to 1. They cannot be colored with the same value. And that's a MIP model as well.

Very different from the other one: we have very different decision variables, very different constraints. All of them are different, actually. So how does this program behave? Beautiful. This is what I told you before; I'll let you see it, because we did the slides, so you have to appreciate it. This is the linear relaxation. The linear relaxation is going to tell you 0.27, and then seven, seven, seven, seven, forever. Which basically means that this is really great, right? So now this linear relaxation, instead of telling you that you need one color, is going to tell you that you need at least two colors. Wow, okay? And you see the values of the various decision variables inside the linear relaxation; for one particular country, these are the various values. Of course they are fractional: this is what the linear relaxation does, it always tries to find values that minimize the objective, and you get these fractional values all over the place. But in a sense this model is much better than the previous one. We have these values, and we could branch, and we are going to branch on these values, and so on. In this particular case we are going to use only 12 nodes for finding the optimal solution and 22 nodes for proving optimality. So, a better relaxation in this particular case, and a smaller search tree to explore. What is interesting here, obviously, is that the linear relaxation is improved, which is what we wanted. So the 0/1 model is much better than the big-M model, and typically, when you can avoid the big M, it's a good thing to do.

So, now, can I actually do better than this? Look at these two sets of constraints that we have discussed before. Essentially, what we are saying in the first one is that the objective has to be greater than v times the binary variable saying that country c takes color v. Which basically means that if the color of country c is v, then the objective has to be greater than v, because this is a 0/1 variable. And what the second constraint is telling you is that every country is assigned a single value. So, can we actually do better than this? We know that in a feasible solution, for a particular country, exactly one of these color variables will be 1 and all the other ones are going to be 0. So can we restate these particular constraints to make them stronger for the linear relaxation? It doesn't matter for optimality, but for the linear relaxation, can it be better?

So we can take these two constraints and try to combine them to get a stronger constraint for bounding the objective value. What we can do is take all the colors, multiply them by their value v, which is essentially the right-hand side of the constraints that we had before, and sum them up. Why can we do that? Because, as I told you, only one of them is going to be 1, and therefore, in a feasible solution, this sum is exactly the value assigned to the country, so it really corresponds to the constraints we had. So essentially, like the aggregation in warehouse location, we are aggregating the constraints, but of course here we are using the exactly-one constraint to justify the aggregation. And so now we have this new constraint, and we believe that the objective bound, of course, is going to move up, right?
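A small hypothetical calculation of why the summed constraint can bound the objective more tightly at fractional points (the value 0.25 is illustrative, not the lecture's relaxation output):

```python
# One country, four colors; fractional relaxation values summing to 1.
b = [0.25, 0.25, 0.25, 0.25]

# Per-value constraints: obj >= v * b[v] for each value v separately.
bound_per_value = max(v * bv for v, bv in enumerate(b))   # 3 * 0.25 = 0.75

# Aggregated constraint: obj >= sum over v of v * b[v].
bound_aggregated = sum(v * bv for v, bv in enumerate(b))  # 0 + 0.25 + 0.5 + 0.75 = 1.5

print(bound_per_value, bound_aggregated)  # 0.75 1.5
```

Since the b values are nonnegative, the sum always dominates every individual term, so the aggregated lower bound is never weaker than the per-value ones at any point of the relaxation.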

Now, what is the hope here? When I show you the relaxation of this thing, what you see is that we had the objective greater than or equal to 0.5, because that was the largest color term. But the binary variables for a particular country sum to 1, obviously. So we can say: if I multiply this one by 1, this one by 2, and this one by 3, I get a value which is greater than 0.5, and that's why I want to aggregate.

So when I put this new constraint in, I believe that my linear relaxation is going to be better. But, you know, this linear relaxation is very unpredictable; you never know what it is going to do. So look at what it does when we add this constraint instead of the other one. Now the linear relaxation is 0.5. Yes, it's better; not much better, but better. But look, it completely changed the values of the assignment, right? Now the first two values are at 0.5, and the linear relaxation, not me, has removed the values for the two other colors. So this is interesting: instead of fractional values spread over all the colors, the linear relaxation found that to get the minimum value it has to put 0.5 on each of the first two colors and 0 on the other ones. Why? Because those other ones are multiplied by two and by three, so you want to keep them as small as possible. That's what the linear relaxation does to you: it is always trying to find a way to get down, down, down; we are minimizing, right? So essentially what I've shown you here is that in this particular case we can improve the relaxation, but it also changes the values of the solution.

Now we also know that we need at least two colors. What is interesting is when you compare the two; these are the two solutions that I showed you. When you try to solve this, this is what you get: nine nodes. So you get to the optimal solution faster than with the first relaxation that I showed you for the binarized version of the model, but we need 41 nodes here, so we need more nodes overall. So once again, having a stronger relaxation doesn't necessarily mean that your search is going to be faster, because you may branch differently. We have a stronger relaxation in this particular case, but solving the model actually used more nodes.

So, what I've shown you today, to summarize, is that when you look at different models, what you have to do is try to find a model which has a good relaxation. I've shown you different techniques for representing constraints that are not directly linear: you can binarize the variables, or you can use the big-M transformation, and I've shown you the influence of these choices. I've also shown you that how you express the linear constraints has an impact on the linear relaxation and therefore on the efficiency of your algorithm. Next, we're going to look at better ways of actually strengthening these relaxations. And it's going to be fascinating. Thank you.
