Welcome back. Transfer learning is one of the most important techniques in deep learning, and TensorFlow lets you do it with just a handful of lines of code. Rather than training a neural network from scratch, which can need a lot of data and take a long time, you can instead download an open-source model that someone else has already trained on a huge dataset, maybe for weeks, and use those parameters as a starting point to train your model just a little bit more, perhaps on a smaller dataset that you have for a given task. That's why it's called transfer learning. I find transfer learning really cool because we've been looking at convolutional neural networks up to this point, and when we train a convolutional neural network, we're extracting features from an image and learning the ones that match our images. But when we download a large model, something like Inception, it comes with all of these features that have already been extracted from far more images than we could ever use ourselves. I find that really exciting because, as you mentioned, the researchers who built these wonderful models on lots of images have been able to share them. Up to now we've been training convolutions from scratch on our own data and our own images, to extract features from them and learn them. But these models, which have been trained on lots and lots of images over a lot of time, might have spotted other features that weren't in our dataset, and that will help us build better convolutional networks. Yes. There's so much for neural networks to learn about the world. There are edges, corners, round shapes, curvy shapes, blobs, and then there are things like eyes, circles, squares, wheels.
There are so many things in the world that convolutional neural networks can pick up on, but if you have only 1,000 images, or even 25,000 images, that may not be enough data for a ConvNet to learn all those things. So by taking an Inception network, or some other network that someone else has trained, you can basically download all this knowledge into your neural network and give it a huge, much faster start. It's really exciting to be part of a community where these things are shared, so that people who don't have access to the machinery needed to build massive models like this one can still use the features learned by the people who have done so. I personally find that really inspirational. Yes. Many days I wake up really grateful to the whole AI community for its openness. So many breakthrough AI ideas are shared in papers on the Internet and in open-source code shared on the Internet, and I think this has been a large reason for the rapid advance of AI. To all the learners: if you end up building something, consider sharing it freely on the Internet as well, to contribute back to this open community that's making all of us go much faster. Definitely. Transfer learning is a technical embodiment that lets us use these ideas to accelerate the whole field. One of the cool things about transfer learning is that it's so simple to implement. In TensorFlow, you download a model, you set some layers as trainable, freeze or lock the other layers, and then you just run. Yes. In the course, what the students are going to be looking at is the Inception model, which is a beautiful image classifier that classifies the 1,000 categories it has been trained on. I think it was trained on over a million images. What you're going to do is take a look at one of the lower layers in that model, and lock everything all the way down to that layer.
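The "download a model, then freeze its layers" step described above can be sketched in a few lines of Keras code. This is a minimal sketch, not the course's exact notebook; the `(150, 150, 3)` input shape is just an example you would adjust for your own data:

```python
import tensorflow as tf

# Download InceptionV3 pre-trained on ImageNet. include_top=False drops
# its final classification layer, since we'll add our own later.
base_model = tf.keras.applications.InceptionV3(
    input_shape=(150, 150, 3),  # example size; adjust for your dataset
    include_top=False,
    weights='imagenet')

# Freeze (lock) every layer so the pre-trained features stay as they are
# and only the layers we add later will be trained.
for layer in base_model.layers:
    layer.trainable = False
```

With the layers locked, calling `model.fit` later will leave all of these downloaded parameters untouched.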
You're going to add a DNN, a deep neural network, underneath that, and then you're just going to retrain only those new layers. As a result of using all of this (I like to call it standing on the shoulders of giants), you're going to make the classifiers that you've been building in the course to date much more efficient, and maybe even quicker to reach higher levels of accuracy than if you trained them from scratch. So this week, you'll learn how to implement transfer learning and use it to get your models to not only train faster but also reach higher accuracy. Please dive in.
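Putting the pieces together, the whole recipe described here, lock the pre-trained layers down to a chosen cut-off, then stack a small DNN on top and train only that, might look like the following sketch. The `'mixed7'` layer name, the binary sigmoid output, and the optimizer settings are assumptions for illustration, not the only valid choices:

```python
import tensorflow as tf

# Download the pre-trained base and freeze all of its layers.
base_model = tf.keras.applications.InceptionV3(
    input_shape=(150, 150, 3), include_top=False, weights='imagenet')
for layer in base_model.layers:
    layer.trainable = False

# Cut the network off at an intermediate layer ('mixed7' here) and use
# its output as the extracted features.
last_output = base_model.get_layer('mixed7').output

# Stack a small DNN classifier on top of the frozen features.
x = tf.keras.layers.Flatten()(last_output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dropout(0.2)(x)  # helps guard against overfitting
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)  # binary classifier

model = tf.keras.Model(base_model.input, x)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Training updates only the new Dense layers; the Inception features
# transfer over unchanged, e.g.:
# model.fit(train_data, epochs=..., validation_data=val_data)
```

Because the base is frozen, each training step only has to compute gradients for the small classifier head, which is why transfer learning trains so much faster than training the full network from scratch.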