Let's recap our work. We completed the end-to-end process for operationalizing machine learning models, beginning with data exploration and visualization. We then worked with a sub-sample of the dataset to develop our TensorFlow model. Once we had prototyped our model locally, we used Cloud Dataflow to create our training and evaluation sets. Using the entire dataset, we then trained our model using Cloud ML Engine. Using this trained model, we served up a prediction service that an end user could consume via a Flask application that we deployed using App Engine.

So, at a high level, the first step was exploring and visualizing our natality dataset. Our tool of choice here was Cloud Datalab, which let us interactively query and visualize the data. Remember, even though our dataset was tens of millions of rows in size, we used aggregate functions in BigQuery, which allowed us to visualize the dataset in Datalab.

Once we got a handle on our dataset, we created a sample dataset. Why did we do this? So that we could have a small, manageable dataset to train a model locally. This allowed us to quickly prototype a model that, once ready, we could later scale up to the entire dataset. Because we were working with only a few thousand rows, we did all of our preprocessing using pandas, which works quite well for small datasets; of course, we would run into problems once we started working with larger datasets.

Using our newly created sample dataset, we developed a TensorFlow model locally. For our model, we used the TensorFlow Estimator API. In our case, we used canned estimators: a linear model, a deep neural network model, or a wide-and-deep model that combines both techniques.

After building our TensorFlow model, we created training and evaluation datasets using the entire dataset. We ran a Cloud Dataflow job to preprocess the data and create CSV files for both our training and evaluation sets. As a reminder, Cloud Dataflow uses the Apache Beam API and provides a runner that executes data processing pipelines at scale on a serverless architecture.

Once we had created our training and evaluation datasets, we executed training in the cloud. Remember, we first built our model locally in a Datalab notebook, but at this stage we packaged up our code and submitted an ML Engine job. Analogous to Cloud Dataflow, ML Engine lets us scale our processing in a serverless environment. The key benefits are dynamic provisioning and automatic scaling: you only pay for what you use, plus you get to train at cloud scale.

Once we trained our model, we used it to deploy a prediction service. With ML Engine, you specify the name of your model, the version number, and the location of the trained TensorFlow model artifacts produced during training. With these in hand, we deployed our model in just a few lines of code.

For the final step, we invoked our ML prediction service as a client. We deployed a Flask application using Python and App Engine. In this app, users entered inputs into an HTML form, which were passed as a JSON request to the deployed ML model. ML Engine made a prediction and returned it from the back end to the front end, and the front end then displayed the returned value.

We hope you have enjoyed this section on end-to-end machine learning and learning how to operationalize a machine learning model. Before we move on, here are a few short code sketches that illustrate the steps we just walked through.
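First, exploration. In the course we ran our queries through Datalab's BigQuery helpers; here is a minimal sketch of the same idea using the standard google-cloud-bigquery client instead. The aggregate query means only a handful of rows come back to the notebook, no matter how large the underlying table is. The plotting call assumes matplotlib is available in the notebook environment.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes the notebook is authenticated to a GCP project

# Aggregate tens of millions of natality rows down to one row per year,
# so the result is small enough to pull into the notebook and plot.
sql = """
SELECT
  year,
  AVG(weight_pounds) AS avg_weight,
  COUNT(*) AS num_births
FROM
  `bigquery-public-data.samples.natality`
GROUP BY year
ORDER BY year
"""

df = client.query(sql).to_dataframe()
df.plot(x='year', y='avg_weight')
```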
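Next, the local sample. The filename and split below are illustrative (our actual sample came out of BigQuery), but this is the flavor of the pandas preprocessing we did on a few thousand rows:

```python
import pandas as pd

# Hypothetical filename for the sampled natality rows exported earlier.
df = pd.read_csv('natality_sample.csv')

# Drop rows that are missing the label or key features.
df = df.dropna(subset=['weight_pounds', 'mother_age',
                       'plurality', 'gestation_weeks'])

# An 80/20 train/eval split; a fixed seed keeps the split reproducible.
train_df = df.sample(frac=0.8, random_state=42)
eval_df = df.drop(train_df.index)
train_df.to_csv('train_sample.csv', index=False)
eval_df.to_csv('eval_sample.csv', index=False)
```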
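For the model itself, a wide-and-deep canned estimator looks roughly like this with the TF 1.x Estimator API. The feature columns and vocabulary values here are illustrative, and the input function is left out:

```python
import tensorflow as tf

# Sparse (categorical) columns feed the linear "wide" side of the model.
is_male = tf.feature_column.categorical_column_with_vocabulary_list(
    'is_male', ['True', 'False', 'Unknown'])
plurality = tf.feature_column.categorical_column_with_vocabulary_list(
    'plurality', ['Single(1)', 'Twins(2)', 'Triplets(3)'])

# Dense (numeric) columns feed the deep neural network side.
mother_age = tf.feature_column.numeric_column('mother_age')
gestation_weeks = tf.feature_column.numeric_column('gestation_weeks')

model = tf.estimator.DNNLinearCombinedRegressor(
    model_dir='trained_model',
    linear_feature_columns=[is_male, plurality],
    dnn_feature_columns=[mother_age, gestation_weeks],
    dnn_hidden_units=[64, 32])

# model.train(input_fn=train_input_fn, max_steps=1000)
# where train_input_fn streams batches from the sample CSVs.
```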
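For the Dataflow step, a Beam pipeline reads from BigQuery, converts rows to CSV lines, and writes sharded files. This sketch omits the pipeline options (project, region, and the DataflowRunner) that you would pass to actually scale it out; the bucket paths are placeholders, and a second branch with a complementary filter would produce the evaluation files the same way:

```python
import apache_beam as beam

COLUMNS = ['weight_pounds', 'is_male', 'mother_age',
           'plurality', 'gestation_weeks']

def to_csv(row):
    # Each BigQuery row arrives as a dict; emit it as one CSV line.
    return ','.join(str(row[col]) for col in COLUMNS)

query = """
SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks
FROM `bigquery-public-data.samples.natality`
WHERE year > 2000
"""

# With the DirectRunner this runs locally; the same code scales out
# serverlessly when submitted with the DataflowRunner.
with beam.Pipeline() as p:
    (p
     | 'ReadFromBQ' >> beam.io.ReadFromBigQuery(
           query=query, use_standard_sql=True,
           gcs_location='gs://my-bucket/tmp')
     | 'ToCSV' >> beam.Map(to_csv)
     | 'Write' >> beam.io.WriteToText('gs://my-bucket/babyweight/train',
                                      file_name_suffix='.csv'))
```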
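Finally, the client. Here is a condensed sketch of how the Flask back end can forward the form inputs as a JSON request to the deployed model, using the google-api-python-client library; the project, model, and version names are placeholders:

```python
from flask import Flask, jsonify, request
from googleapiclient import discovery

app = Flask(__name__)

# Build a client for the ML Engine v1 API (uses the app's credentials).
service = discovery.build('ml', 'v1')
MODEL = 'projects/my-project/models/babyweight/versions/v1'

@app.route('/predict', methods=['POST'])
def predict():
    # The HTML form fields become a single JSON instance.
    instance = {
        'is_male': request.form['is_male'],
        'plurality': request.form['plurality'],
        'mother_age': float(request.form['mother_age']),
        'gestation_weeks': float(request.form['gestation_weeks']),
    }
    response = service.projects().predict(
        name=MODEL, body={'instances': [instance]}).execute()
    # Send the prediction back for the front end to display.
    return jsonify(response['predictions'][0])
```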
In this next section, we'll talk about how to train, deploy, and predict with ML models in a way that makes them production-ready. We'll examine the factors you must take into account when building a real-world ML system.