All right, now that we've talked about depth first search and breadth first search, we're ready to probe our understanding of these concepts with a concept challenge. As you might remember, with these concept challenges we invite you to pause and try to solve a problem yourself. Then discuss with other learners around you if you can. Watch the UC San Diego learners discuss the same problem. Try the question again, perhaps with a little more confidence, and then confirm your understanding as we come back together and explain some of the key concepts involved. For this problem, what we'd like to do is compare the two algorithms we have for graph search, namely depth first search and breadth first search. As you remember from the core videos that Christina presented, the pseudocode for these algorithms is very, very similar; what really distinguishes them, in terms of the code, is the basic data structure we use to keep track of what we've visited and what we still need to do. But what we'll see is how this choice impacts performance. So the question I invite you to think about is: which algorithm, depth first search or breadth first search, is asymptotically faster in the worst case? When you're answering this question, think about the asymptotic notation that we've talked about in the previous courses with big-O analysis.

>> Hi, I'm Unger. >> Hi, I'm Kevin. >> Hi, I'm Annie.

>> So, I think that depth first search would be better because, let's say there's a graph and it's very deep but it's not very broad. If I have to find the path from the root to a node at the very end, then I can reach it directly with depth first search. So I think that depth first search would be faster.

>> Okay, but what about a graph in which there are a lot of nodes and they're really spread out? Wouldn't breadth first search be faster in that scenario?
>> Well, I think just looking at the code for breadth first search and depth first search, it all looks pretty similar. I mean, it looks like the only difference is that depth first search uses a stack data structure and breadth first search uses a queue. And I guess, from what you were saying, it just depends on what the worst case actually is, right? So what is the worst case?

>> So let's say the two nodes are not connected and we're trying to find the path between them; I think that would be the worst case.

>> Yeah, and in such a case, since we would have to visit every single unvisited node in the graph, wouldn't the two algorithms be very similar in terms of run time?

>> Yeah. >> I agree.

>> All right, now let's think through this question together. Let's think, in particular, about what the worst case analysis should look like. We have to think about what our algorithm will do in the very worst case, and the worst case is when we have to do the most work. We can think about starting at some node, and then having to visit more and more and more nodes. We'd like to get to the goal, and when will that happen in the worst case? Well, it's when we've explored everything else that was possible, gone through all the dead ends we had, and still need to keep going further until we get to the goal. So if we think about how many nodes we have to visit until we get to the goal, in the worst case we're gonna have to visit every single node before we get there. The number of nodes each of these algorithms will visit is the size of the vertex set minus one, which is order the size of the vertex set. Now, is that it? Is that the only contribution to the run time we have to think about?
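To make the students' observation concrete, here is a minimal sketch (not the course's reference pseudocode) of a generic graph search in Python, where the only difference between BFS and DFS is whether the frontier is popped as a queue (FIFO) or a stack (LIFO). The adjacency-list representation is an assumption for the example:

```python
from collections import deque

def graph_search(adj, start, use_queue):
    """Generic search over an adjacency list `adj` ({node: [neighbors]}).
    Popping the frontier as a queue gives BFS; popping it as a stack gives DFS.
    Returns the order in which nodes are visited."""
    visited = {start}
    frontier = deque([start])
    order = []
    while frontier:
        # The ONLY line that differs between BFS and DFS:
        node = frontier.popleft() if use_queue else frontier.pop()
        order.append(node)
        for neighbor in adj.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(neighbor)
    return order
```

On a small diamond-shaped graph, `graph_search(adj, 0, use_queue=True)` expands nodes level by level, while `use_queue=False` dives down one branch first; the bookkeeping, and hence the worst-case work, is identical.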
When we think back to the algorithm, in either case we definitely do something for every node that we visit, but as part of the algorithm we also have to keep track of the edges available to us from the particular node we're visiting, or exploring, at that moment. So we end up doing some work for each of the current node's unvisited neighbors, which means going through all possible edges. Not only do we have to think about which nodes are visited, we also have to think about which edges are traversed by this algorithm, and in the worst case we traverse every single one of them. Since we have to do some constant amount of work for traversing each edge in the graph, our run time will have a contribution that corresponds to O of the number of edges in the graph as well. Putting that together, the worst case run time for both depth first search and breadth first search is gonna be O of the number of vertices plus the number of edges. Now you might stop there and say, well, hold on a second. I know my asymptotics, and I know that if the number of edges is quadratic in the number of vertices, then the number of edges is going to be the dominant term, so I can just drop the number of vertices and say the worst case running time is just O of the number of edges. But it's useful in our analysis of an algorithm to keep separate the number of vertices, the number of edges, and how each impacts performance; that way we can tune our analysis to the actual graphs we're working with. For example, in the graph that you're working with for the project, you might notice that the number of edges isn't anywhere near quadratic in the number of vertices; it's really linear in the number of vertices.
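One way to see the O(|V| + |E|) bound concretely is to instrument a search and count the work. This sketch (a hypothetical helper, not from the course) charges one unit per vertex visited and one unit per edge traversal over an entire graph, which is what happens in the worst case when the goal is unreachable and every component must be explored:

```python
def search_work(adj):
    """Run a full (depth first) search over every component of `adj`
    and count the work: one unit per vertex visited plus one unit per
    edge traversal, illustrating the O(|V| + |E|) bound."""
    visited = set()
    vertex_visits = 0
    edge_traversals = 0
    for start in adj:                 # cover every component (goal absent)
        if start in visited:
            continue
        visited.add(start)
        stack = [start]
        while stack:
            node = stack.pop()
            vertex_visits += 1
            for neighbor in adj[node]:
                edge_traversals += 1  # every edge endpoint is examined
                if neighbor not in visited:
                    visited.add(neighbor)
                    stack.append(neighbor)
    return vertex_visits, edge_traversals
```

On an undirected triangle plus one isolated node, this counts 4 vertex visits and 6 edge traversals (each undirected edge is stored, and hence examined, from both endpoints), matching the |V| + |E| accounting above up to constant factors.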
And so in that situation, the performance that we have for depth first search and breadth first search is really going to be linear in the size of the graph rather than quadratic.
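As a rough illustration of why keeping |V| and |E| separate pays off, compare the edge counts of a dense graph and a sparse one (the function names here are just for illustration, not from the course):

```python
def complete_graph_edges(n):
    # Dense extreme: every pair of the n vertices is connected,
    # so |E| = n * (n - 1) / 2, which is quadratic in n.
    return n * (n - 1) // 2

def path_graph_edges(n):
    # Sparse extreme (closer to a road network): a simple path
    # on n vertices has |E| = n - 1, which is linear in n.
    return n - 1
```

For n = 1000 vertices, the complete graph has 499,500 edges while the path has only 999, so the same O(|V| + |E|) bound is effectively quadratic in the first case and linear in the second.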