[MUSIC] In the last video we saw that the traveling salesperson problem is provably NP-hard, which means it's likely very difficult to solve. But as computer scientists we often run into these problems, and one of the things I love most about being a computer scientist is that we don't stop there. We still need solutions to these problems, so we come up with some way to find a solution which maybe isn't the best but is reasonable, and we do this all the time. And this introduces us to the notion of approximations and heuristics. So, at the end of this video, you should be able to apply heuristics to come up with a reasonable solution to a hard problem, and you should be able to apply the 2-Opt heuristic to the traveling salesperson problem. So again, at the end of the last video, we'd shown that the traveling salesperson problem, as an optimization problem, is NP-hard. So, if we're trying to find that minimum distance, this is gonna be really difficult. But all we have to do is relax our constraints just a little bit, and we can actually do much better. And the way we're gonna relax is just to say, instead of finding the minimum distance, could we just find something that's reasonable? Now, definitions of reasonable can change, but for right now, let's just see if we can come up with a solution which will work for us. And that leads us to heuristics and approximation algorithms. Now, heuristics and approximation algorithms have a fair amount in common. The idea is essentially that you have some intuition about how you might find a decent path. It may not lead you to the best path, but it's gonna at least eliminate some of the bad paths out there. And sometimes you may include some kind of randomization, so you try a whole bunch of different things and then pick the best of those. Now, the big distinction between heuristics and approximation algorithms is just the bounds.
So, with heuristics you're not really sure how well this is gonna do, whereas with approximation algorithms you can often bound the algorithm to say it will do at least this well. So, what we're gonna do is look at a number of these, and in fact you've already looked at one. The greedy algorithm that we talked about earlier, where we tried to just pick the next best choice as we worked through the traveling salesperson problem, was a heuristic. Specifically, it's a constructive heuristic. So, let's talk about these different kinds of heuristics. Well, the first category is constructions. With constructions, what you're trying to do is essentially build a solution. So, you do something to construct the solution as you go along, and once you've got that solution you're done. With nearest neighbor, or the greedy algorithm, you're constructing the solution by always moving to the closest unvisited neighbor. There are other approaches that essentially try to break the graph down into pieces and then reconstruct a solution from those pieces. The other approach you can take is to come up with a solution and then iterate on it. So, with an iterative solution, what you do is take that initial solution and then just keep improving on it. Algorithms in that space are things like k-opt or genetic algorithms, and what they do is take a solution and just keep fixing it. Keep making it a little bit better, and a little bit better, until some point where they can't really make it better. Now, I do want to remind us that in both of these cases we aren't guaranteed to find the best solution, but it's gonna help us find a decent one. So, what I wanna do is give us an example of essentially both of these. We're gonna work through the greedy algorithm again, and then apply the 2-Opt heuristic on top of that to improve it. So, here's a basic graph.
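Before we trace through the example by hand, here's what the nearest-neighbor construction just described might look like in code. This is a minimal sketch, not the course's reference implementation; the edge weights are chosen to match the numbers in this lesson's example (the D-A weight of 12 is inferred from the stated tour length of 28).

```python
# Nearest-neighbor (greedy) construction for TSP.
# Weights chosen to match the 4-node example in this lesson;
# the D-A weight of 12 is inferred from the stated tour length of 28.
DIST = {
    ('A', 'B'): 5, ('B', 'C'): 6, ('C', 'D'): 5,
    ('D', 'A'): 12, ('A', 'C'): 6, ('B', 'D'): 8,
}

def dist(u, v):
    """Symmetric lookup for an undirected edge weight."""
    return DIST.get((u, v)) or DIST[(v, u)]

def nearest_neighbor(nodes, start):
    """Build a tour by always moving to the closest unvisited node."""
    tour = [start]
    unvisited = set(nodes) - {start}
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda v: dist(here, v))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour  # the return leg back to `start` is implied

def tour_length(tour):
    """Total length of the closed tour, including the edge back to the start."""
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

tour = nearest_neighbor('ABCD', 'A')
print(tour, tour_length(tour))  # ['A', 'B', 'C', 'D'] 28
```

Note that the greedy choice is purely local: from each node we take the cheapest edge to any unvisited node, which is exactly what produces the length-28 tour in the walkthrough.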
And I do want to say that these heuristics and approximation algorithms are far more interesting with much, much larger data sets. But we're gonna work with a really small example, so we can see how this works. So, with the small example, we'll apply the greedy algorithm to it. We'll start at A and find the shortest edge out of A, which is 5, leading to B. The shortest edge out of B is 6, which leads to C. The shortest edge out of C is 5, which leads to D, and at that point every node has been visited, so you're gonna have to follow the remaining edge back to A. Now, this comes out to a tour of length 28, which isn't the best, but also not the worst. And the idea now with 2-Opt is to say: I have a solution, can I make it better? So the intuition behind the 2-Opt heuristic is to examine pairs of edges, remove them from your solution, try to repair the solution in a different way, and see if your new solution is better. So, I'm gonna do two examples of this. First, let's select the edges AB and CD. We remove those edges and try to repair our solution by adding in AC and BD. What this looks like then is essentially A to C, C to B, B to D, and then D back to A. And if I check the length on that, it's gonna turn out to actually be worse, right? I'm taking out two edges that cost 5 each, and I'm putting in an edge that costs 6 and an edge that costs 8. That's a mistake, this is worse, we're not gonna do this. So, I'm gonna scratch out that approach, go back to my original solution, and start working from there again. The next thing I'm gonna try is to remove edges BC and DA, and repair the solution by adding in edges BD and CA. What I end up with then is a path from A to B, from B to D, from D to C, and then from C back to A.
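A useful way to see the repair step: removing two edges and reconnecting the tour the only other valid way is the same as reversing the segment of the tour between them. Here's a small sketch of that move, hedged as an illustration using the same assumed weights as this example (the D-A weight of 12 is inferred from the stated tour length of 28):

```python
# A 2-opt move: drop two edges from the tour and reconnect it the only
# other way possible, which amounts to reversing the segment between them.
# Weights chosen to match the 4-node example in this lesson.
DIST = {
    ('A', 'B'): 5, ('B', 'C'): 6, ('C', 'D'): 5,
    ('D', 'A'): 12, ('A', 'C'): 6, ('B', 'D'): 8,
}

def dist(u, v):
    """Symmetric lookup for an undirected edge weight."""
    return DIST.get((u, v)) or DIST[(v, u)]

def tour_length(tour):
    """Total length of the closed tour, including the edge back to the start."""
    return sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def two_opt_move(tour, i, j):
    """Remove edges (tour[i], tour[i+1]) and (tour[j], tour[(j+1) % n]),
    then reconnect by reversing the segment tour[i+1 .. j]."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

greedy = ['A', 'B', 'C', 'D']            # the length-28 greedy tour
worse  = two_opt_move(greedy, 0, 2)      # drop A-B and C-D -> A, C, B, D
better = two_opt_move(greedy, 1, 3)      # drop B-C and D-A -> A, B, D, C
print(tour_length(worse), tour_length(better))  # 32 24
```

The first swap is exactly the failed attempt in the walkthrough (it trades two cost-5 edges for a 6 and an 8), and the second is the successful one.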
And this is actually a better path: its length is 24, versus a length of 28 for our original. So, what we do then is recognize that this is better and select the better solution. I'll get rid of my old solution and start working with this better one. And then I'll just keep doing this until I can't find any more improvements. So, this happened to find us the optimal solution. Greedy plus 2-Opt, for this very simple example, worked and found the optimal. That isn't gonna be true in the general case. If you have larger data sets, larger graphs, it's very possible that greedy plus 2-Opt will find a non-optimal solution. But we like this approach because, although it doesn't guarantee the optimal, it's actually pretty fast and it comes up with fairly effective solutions. So, what I encourage you to do is step back and look at some of these other solutions. Look at the different constructions out there, look at some of the iterative solutions out there, just to see how they work. And I encourage you to think about what the limitations and the benefits are of these different approaches. But at a high level, what I want us to realize is that changing the problem itself can be incredibly powerful. When we said that our problem was to find the exact minimum distance, that was really difficult. If we just relax that constraint to say, well, I want to come up with an answer that's good but doesn't have to be the best, that allows us to come up with an answer that's reasonable, and that's often good enough for what we're trying to do.
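The "keep swapping until nothing improves" loop described above can be sketched as follows. This is one plausible way to write it, not the course's reference code, and it reuses the same assumed weights for this lesson's example graph (the D-A weight of 12 is inferred from the stated tour length of 28):

```python
# Full 2-opt improvement loop on top of a starting tour: try every pair of
# non-adjacent edges, keep any swap that shortens the tour, and repeat
# until a full pass finds no improvement.
DIST = {
    ('A', 'B'): 5, ('B', 'C'): 6, ('C', 'D'): 5,
    ('D', 'A'): 12, ('A', 'C'): 6, ('B', 'D'): 8,
}

def dist(u, v):
    """Symmetric lookup for an undirected edge weight."""
    return DIST.get((u, v)) or DIST[(v, u)]

def tour_length(tour):
    """Total length of the closed tour, including the edge back to the start."""
    return sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def two_opt_move(tour, i, j):
    """Remove edges (tour[i], tour[i+1]) and (tour[j], tour[(j+1) % n]),
    then reconnect by reversing the segment tour[i+1 .. j]."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def two_opt(tour):
    """Repeatedly apply improving 2-opt moves until no swap helps."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share the start node; skip
                candidate = two_opt_move(tour, i, j)
                if tour_length(candidate) < tour_length(tour):
                    tour = candidate
                    improved = True
    return tour

result = two_opt(['A', 'B', 'C', 'D'])  # start from the length-28 greedy tour
print(result, tour_length(result))      # ['A', 'B', 'D', 'C'] 24
```

On this tiny graph the loop terminates after one successful swap, landing on the length-24 tour from the walkthrough; on larger graphs it would keep going until a whole pass finds nothing better, and, as noted above, it may stop at a tour that isn't globally optimal.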