Here's the transfer learning workbook from this lesson. Let's step through it, and when we're done, you can try it for yourself. The first cell downloads the weights for a pre-trained Inception network and then instantiates a new instance of it using those weights. We'll pull out one of its convolutional layers and take that layer's output; we call this last output. Now we'll set up our model, taking the last output as its input. That will be flattened, and then there'll be a dense layer, a dropout, and an output layer.

The next cell downloads the abbreviated version of Cats versus Dogs, unzips it into training and validation directories, and then sets up the image generators. The training one uses augmentation, as we've explored before. We can then see that the images are being loaded and segregated into classes correctly: 2,000 for training and 1,000 for validation.

We'll now start the training. I'm only going to do 20 epochs, so keep an eye on the accuracy and validation accuracy metrics. I'm speeding up the video to save a little time, but as you can see, the training accuracy is steadily increasing, and the validation accuracy is settling in around the mid-90s. By the time we're done, the training accuracy is around 90 percent, and the validation accuracy is close to 97 percent. That's in pretty good shape. So let's plot the 20 epochs, and we can see that the curves are in sync. That's a good sign that we're avoiding overfitting.

So that's it for this lesson. In this and the last few lessons, we've spent a lot of time looking at convolutional neural networks for binary classification. Of course, another scenario arises when you have to classify between multiple classes. So in the next lesson, we'll look at what you have to do to achieve that.
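The model setup described above can be sketched roughly as follows. This is a hedged reconstruction, not the workbook verbatim: the layer name `'mixed7'`, the 150x150 input size, and the 1,024-unit dense layer are assumptions based on the usual shape of this exercise, and `weights='imagenet'` stands in for the weights file the first cell downloads.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Pre-trained Inception V3 base; include_top=False drops its built-in
# classifier so we can attach our own head. (The workbook loads weights
# from a downloaded file; weights='imagenet' has the same effect here.)
base = tf.keras.applications.InceptionV3(
    input_shape=(150, 150, 3),  # assumed input size
    include_top=False,
    weights='imagenet')

# Freeze the pre-trained layers so only the new head is trained.
for layer in base.layers:
    layer.trainable = False

# Pull one of the convolutional layers and take its output.
last_layer = base.get_layer('mixed7')  # assumed layer name
last_output = last_layer.output

# New head: flatten, dense, dropout, and a sigmoid output for the
# binary cats-vs-dogs classification.
x = layers.Flatten()(last_output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(base.input, x)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```

Freezing the base is what makes this transfer learning rather than training from scratch: the convolutional features stay fixed, and only the dense head learns the cats-versus-dogs distinction.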
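The augmented training generator can be sketched like this. In the workbook the images come from the unzipped directories via `flow_from_directory`; to keep this snippet self-contained it applies the same kind of augmentation to a random in-memory batch, and the specific augmentation values are assumptions.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings of the kind used earlier in the course:
# rescaling plus random rotations, shifts, shears, zooms, and flips.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# In the workbook this would read from the training directory, e.g.:
# train_generator = train_datagen.flow_from_directory(
#     train_dir, target_size=(150, 150), batch_size=20, class_mode='binary')
# Here, a random batch stands in for the image directories.
images = np.random.randint(0, 256, size=(20, 150, 150, 3)).astype('float32')
batch = next(train_datagen.flow(images, batch_size=20, shuffle=False))
```

The validation generator would typically use only `rescale=1.0/255`, with no augmentation, so that validation accuracy is measured on unmodified images.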
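The training and plotting steps look roughly like this. To keep the snippet runnable on its own, a tiny stand-in model and random data replace the Inception model and the image generators; in the workbook you would call `model.fit(train_generator, validation_data=validation_generator, epochs=20)` and plot the resulting history the same way.

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

# Tiny stand-in model and random data so the snippet is self-contained.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
x = np.random.rand(64, 8)
y = np.random.randint(0, 2, size=(64,))
history = model.fit(x, y, validation_split=0.25, epochs=3, verbose=0)

# Plot training vs. validation accuracy. Curves that stay in sync,
# rather than diverging, suggest the model is avoiding overfitting.
epochs = range(len(history.history['accuracy']))
plt.plot(epochs, history.history['accuracy'], label='Training accuracy')
plt.plot(epochs, history.history['val_accuracy'], label='Validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('accuracy.png')
```

The `history` object returned by `fit` records one value per metric per epoch, which is exactly what the workbook plots after its 20 epochs.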