In the previous video, we defined the multistage game with perfect information. Now we need to define an optimality principle for it, and we will start with the Nash equilibrium. A Nash equilibrium is a strategy profile such that no player benefits from an individual deviation from it. In other words, the payoff of player i under his equilibrium strategy is greater than or equal to the payoff he would receive by deviating, and this holds for every player. It is important to note that in any finite game in normal form there exists a Nash equilibrium in mixed strategies. Since we defined the multistage game with perfect information as a game in normal form, we can conclude that a Nash equilibrium in mixed strategies exists.

Let's go back to our example. On the slide you can see a Nash equilibrium and the corresponding payoffs, and on the graph the highlighted path corresponds to this Nash equilibrium.

For multistage games with perfect information, we can define a stronger version of the Nash equilibrium, called subgame-perfect equilibrium. Before we do that, we need the notion of the truncation of player i's strategy. The truncation of player i's strategy to the subgame Gamma_z is the same strategy as in the initial game, but defined only on the vertices of the subgame Gamma_z. A Nash equilibrium in the game Gamma is called subgame-perfect if, for any subgame of the initial game, the truncation of the equilibrium strategies is a Nash equilibrium in that subgame. It can be proved that in any multistage game with perfect information on a finite graph tree there exists a subgame-perfect equilibrium in pure strategies.

Why is a subgame-perfect equilibrium better than a Nash equilibrium? A Nash equilibrium is only guaranteed to be an equilibrium in the initial game; if we consider some subgame, its truncation may fail to be an equilibrium there. Now let's construct a subgame-perfect equilibrium for our particular game model.
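The deviation condition described above can be written formally. The notation here is assumed, since the transcript does not show the slide's symbols: u_i is player i's payoff and sigma* is the equilibrium profile.

```latex
% Nash equilibrium: no player i gains by a unilateral deviation
u_i(\sigma_1^*, \dots, \sigma_i^*, \dots, \sigma_n^*) \;\ge\;
u_i(\sigma_1^*, \dots, \sigma_i, \dots, \sigma_n^*)
\quad \text{for all players } i \text{ and all strategies } \sigma_i .
```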
In order to do that, we first consider all subgames of length one. These are the subgames starting at the positions x(1.3), x(1.6), x(1.7), x(2.2), x(2.3), and x(2.7). A subgame of length one is a game of a single player, and for this class of games a Nash equilibrium coincides with the solution of a one-player optimization problem. So in each subgame of length one we choose the alternative that maximizes the payoff of the player who moves in that subgame.

For example, consider the subgame starting at vertex x(2.7). Here player two moves and has four alternatives. To maximize his payoff, he can choose alternative two or alternative three, because both give him a payoff of eight. We fix alternative two and assume that he will choose this strategy.

On the next step, we consider all subgames of length two. These are the subgames starting at the vertices x(1.8), x(2.5), and x(2.6). Consider the subgame starting at vertex x(1.8). Here we do the same thing: we define the alternative for the player who moves at x(1.8). Player one moves here and has two alternatives. With the first alternative he receives a payoff of two; with the second alternative, given the result from the previous step, he would receive a payoff of one. As a result, at vertex x(1.8) player one chooses alternative one.

But let's check whether the strategy profile we obtained, in which player one chooses alternative one and player two chooses alternative two at vertex x(2.7), is actually a Nash equilibrium in this subgame. Player one at vertex x(1.8) has two alternatives.
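The length-one step is just a single-player maximization, and it can be sketched in a few lines. The payoff values below are illustrative assumptions: the transcript only states that alternatives two and three at x(2.7) both give player two a payoff of eight.

```python
# A subgame of length one is a one-player optimization problem:
# the mover simply picks the alternative with the highest own payoff.
# Payoffs for player two at x(2.7); only the two eights are stated in
# the lecture, the other two values are assumptions for illustration.
payoffs_x27 = {1: 5, 2: 8, 3: 8, 4: 3}

# max with key=dict.get returns the first maximizer, which here matches
# the lecture's choice of fixing alternative two.
best = max(payoffs_x27, key=payoffs_x27.get)
print(best, payoffs_x27[best])
```

Because alternatives two and three tie at eight, any tie-breaking rule is acceptable; the lecture simply fixes alternative two.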
Suppose he chooses alternative two; then, according to our strategy profile, he would receive a payoff of one, which is less than the payoff for alternative one. So the condition holds for player one. Now let's check the condition for player two. At position x(2.7) he can choose any of the four alternatives, say alternative four; it doesn't matter which. Since at vertex x(1.8) player one chooses alternative one, player two receives a payoff of three in any case. So no matter which alternative he chooses at vertex x(2.7), he receives a payoff of three. This means that the condition for player two also holds, so the strategy profile we constructed is indeed a Nash equilibrium. The same check can be done for every other subgame of length two.

On the next step, we consider all subgames of length three: the subgames starting at the vertices x(1.4), x(1.5), and x(2.4). Consider the subgame starting at vertex x(1.4), where player one moves. He has three alternatives. If he chooses alternative one, he receives a payoff of two; if he chooses alternative three, a payoff of minus one; and if he chooses alternative two, then, by the result obtained in the previous step, he receives a payoff of six. So of course he chooses alternative two, and the strategy profile composed from the previous steps and this step is a Nash equilibrium.

Let's check that. The condition for player one at vertex x(1.4) is obvious. Then we check the condition for player two at vertex x(2.5). If player two chooses alternative one here, he would receive a payoff of two, which is the same payoff he receives in the equilibrium.
So he has no incentive to deviate, and the condition holds. Now consider the vertex x(1.6), where player one moves. He has three alternatives, and obviously alternative three gives him the highest payoff, so the condition holds. Again, the composed strategy profile is indeed a Nash equilibrium.

Next, we consider all subgames of length four, and there are only two of them: the subgames starting at the vertices x(1.2) and x(2.1). In the subgame starting at vertex x(2.1), player two moves and has three alternatives. Alternative one gives him a payoff of 10, alternative two a payoff of two, and alternative three a payoff of four. So the obvious choice for him is alternative one. We could also verify that this is a Nash equilibrium in this subgame, but that is easy to check.

On the last step, we need to consider the subgames of length five. Since the length of our initial game is five, this subgame is simply the initial game itself. What we need to do is define the alternative for the player who moves at the vertex x_0, or x(1.1), which is player one. If he chooses alternative one, he receives a payoff of five; if he chooses alternative two, a payoff of three; and if he chooses alternative three, even less. So of course he chooses alternative one, and the strategy profile corresponding to the highlighted path on the graph is a Nash equilibrium. Moreover, by construction it is subgame-perfect, because in each subgame we defined a Nash equilibrium.

On this slide, you can see a list of references where you can find the proof of the theorem on subgame-perfect equilibria, as well as more examples.
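The whole step-by-step construction above is backward induction, and it can be sketched as a short recursive procedure. The tree below is a small hypothetical example with an assumed node representation, not the lecture's actual graph.

```python
# Backward induction on a finite game tree with perfect information.
# A node is either a leaf (a tuple of payoffs, one per player) or a dict
# {"id": ..., "player": i, "children": [...]} where player i moves.

def backward_induction(node):
    """Return (payoff_vector, plan), where plan maps each decision
    node id to the index of the chosen alternative."""
    if isinstance(node, tuple):              # leaf: just return its payoffs
        return node, {}
    player, children = node["player"], node["children"]
    plan, best_idx, best_payoffs = {}, None, None
    for i, child in enumerate(children):
        payoffs, sub_plan = backward_induction(child)
        plan.update(sub_plan)
        # The mover keeps the alternative maximizing his own payoff.
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_idx, best_payoffs = i, payoffs
    plan[node["id"]] = best_idx
    return best_payoffs, plan

# Hypothetical two-stage game: player 0 moves at x0, player 1 at y1.
tree = {
    "id": "x0", "player": 0, "children": [
        {"id": "y1", "player": 1, "children": [(5, 10), (3, 2)]},
        (3, 4),
    ],
}
value, plan = backward_induction(tree)
print(value, plan)   # (5, 10) {'y1': 0, 'x0': 0}
```

By construction, the plan returned this way is a Nash equilibrium in every subgame, which is exactly why the procedure yields a subgame-perfect equilibrium.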