All of the layers have names, so you can look up the name of the last layer that you want to use. If you inspect the summary, you'll see that the bottom layers have convolved down to 3 by 3. But I want to use something with a little more information, so I moved up the model description to find mixed7, which is the output of a lot of convolutions that are 7 by 7. You don't have to use this layer, and it's fun to experiment with others. But with this code, I'm going to grab that layer from inception and take its output.

So now we'll define our new model, taking the output from the inception model's mixed7 layer, which we had called last_output. This should look exactly like the dense models that you created way back at the start of this course. The code is a little different, but this is just a different way of using the layers API. You start by flattening the input, which just happens to be the output from inception. Then you add a dense hidden layer, and then your output layer, which has just one neuron activated by a sigmoid to classify between two classes. You can then create a model using the Model abstract class, passing it the input and the layers definition that you've just created. Then you compile it as before, with an optimizer, a loss function, and the metrics that you want to collect.

I won't go into all the code to download cats versus dogs again; it's in the workbook if you want to use it. But as before, you're going to augment the images with the image generator. Then, as before, we can get our training data from the generator by flowing from the specified directory and going through all the augmentations. And now we can train as before with model.fit_generator. I'm going to run it for 100 epochs.

What's interesting, if you do this, is that you end up with another, but different, overfitting situation. Here is the graph of the accuracy of training versus validation.
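The model-building steps described above can be sketched as follows. This is a minimal sketch rather than the exact notebook code: it assumes the course's 150 by 150 input size, uses weights=None so the example builds without downloading the pre-trained weights (in practice you would load them), and the 1024-unit hidden layer and RMSprop learning rate are illustrative choices.

```python
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras import layers, Model

# Load InceptionV3 without its top classifier. weights=None keeps this
# sketch self-contained; in practice you'd load the pre-trained weights.
base = InceptionV3(input_shape=(150, 150, 3),
                   include_top=False,
                   weights=None)

# Freeze the convolutional base so only our new layers train.
for layer in base.layers:
    layer.trainable = False

# Look up the mixed7 layer by name and take its output (7 x 7 x 768
# for a 150 x 150 input).
last_layer = base.get_layer('mixed7')
last_output = last_layer.output

# Flatten that output, add a dense hidden layer, then a single sigmoid
# neuron to classify between the two classes.
x = layers.Flatten()(last_output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dense(1, activation='sigmoid')(x)

# Build the model from inception's input and the new layer stack,
# then compile with an optimizer, loss, and metrics as before.
model = Model(base.input, x)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```

Because the model is built from base.input through to the new sigmoid layer, only the layers up to mixed7 are used at all; everything above mixed7 in inception is simply ignored.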
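The augmentation and generator step can be sketched like this. To keep the sketch runnable anywhere, it builds a tiny throwaway cats-versus-dogs directory of random images; in the workbook that directory would be the real downloaded dataset, and the augmentation parameters shown are the kind used in earlier lessons, not necessarily the exact notebook values.

```python
import os
import tempfile
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in dataset: a few random 150x150 images per class, laid out as
# train_dir/cats/*.jpg and train_dir/dogs/*.jpg the way
# flow_from_directory expects.
train_dir = tempfile.mkdtemp()
for cls in ('cats', 'dogs'):
    os.makedirs(os.path.join(train_dir, cls))
    for i in range(4):
        arr = np.random.randint(0, 255, (150, 150, 3), dtype=np.uint8)
        Image.fromarray(arr).save(os.path.join(train_dir, cls, f'{i}.jpg'))

# Augment the images on the fly with the image generator, as before.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# Flow from the specified directory, passing each batch through the
# augmentations; class_mode='binary' matches the single sigmoid output.
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary')

images, labels = next(train_generator)
```

A generator like this is what you would then hand to training (model.fit_generator in the lesson's code; newer TensorFlow versions accept the generator directly in model.fit).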
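A training-versus-validation accuracy graph like the one described can be drawn from the History object that training returns. The numbers below are hypothetical stand-ins for history.history values, shaped only to illustrate the divergence pattern.

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical values standing in for history.history['accuracy'] and
# history.history['val_accuracy'] from model training.
acc = [0.60, 0.75, 0.85, 0.92, 0.97]
val_acc = [0.62, 0.74, 0.78, 0.76, 0.72]  # diverges: overfitting
epochs = range(len(acc))

plt.plot(epochs, acc, label='Training accuracy')
plt.plot(epochs, val_acc, label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.savefig('accuracy.png')
```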
As you can see, while training started out well, the validation accuracy diverges away from the training accuracy in a really bad way. So, how do we fix this? We'll take a look at that in the next lesson.