Now it is time to go over the main phases of a machine learning lifecycle, and then the components, or tasks, within ML apps. When we look at machine learning projects, we identify three main phases: a discovery phase, a development phase, and a deployment phase.

For the discovery phase, identifying the business need and its use case allows for a clear plan of what the machine learning model will help us achieve. This phase is crucial because it establishes the problem, or task, that needs to be solved, and how solving it will affect the business and the users consuming the product or solution augmented by machine learning. This phase is also when data exploration happens: recognizing what data sets are needed, whether the needed data is readily available and sufficient to train the model, and whether external data sets would be beneficial and how to acquire them. All of these are considerations that belong to the data exploration step.

Then, depending on the task to be performed, an algorithm is chosen by the data science team. The combination of data availability and algorithm, along with the decision of buying versus building the solution, becomes an important consideration for the feasibility assessment, where the team tries to uncover any problems that may arise during the development phase. One example: for the specific use case in question, the data may be available historically but not at inference time. That particular scenario might make the use case infeasible for ML, and a more thorough analysis may have to be performed before the use case can be pursued further. Another aspect of the discovery phase is prioritizing the different use cases that the business has that could become potential ML projects, but that discussion is out of the scope of this course.

Now, for the development phase, you may ask: how does development start on this chart during data exploration?
Shouldn't we wait until the result of the feasibility study? What happens in reality is that, even for data exploration and algorithm selection, some proofs of concept may need to be developed, and that is what we refer to here. After the feasibility assessment gives the go-ahead, the real development starts.

All the data steps, such as cleaning, extracting, analyzing, and transforming, will be implemented during the data pipeline creation. The data pipeline evolves to ensure that all the operations needed on the data, for both offline and streaming, and for both training and inference, are performed consistently, to avoid data skew.

After the data is ready, building and evaluating the model begins. And I say begins, because these steps may need a couple of iterations until the data scientists are happy with the results and ready to present them to the main stakeholders. Considerations include: the use case should be revisited, because the learning algorithm isn't capable of identifying patterns in the data for that task; the data should be revisited, because the model either needs more of it or needs additional aspects, maybe new features, from the existing data; some additional transformations are needed to improve the model quality; or even a different algorithm is perceived as a better choice. There are numerous possibilities, so this iteration will happen as many times as needed, until the model reaches the desired performance.

After results are presented and stakeholders are satisfied with how the model is performing, it is time to plan for model deployment. This is when the following questions will likely arise: Which platform should host my model? Which service should I pick for model serving? How many nodes should the cluster have to scale and handle all the demand in a cost-effective manner? Operationalizing and monitoring the model will allow for maintainability and help avoid model decay, as we discussed.
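To make the point about data skew concrete, one common pattern is to implement each transformation exactly once and reuse it on both the offline (training) path and the online (inference) path. This is only a minimal sketch; the function and feature names are hypothetical, not from the course:

```python
import math

def transform(record: dict) -> dict:
    """Turn one raw record into model features (hypothetical feature names)."""
    return {
        "amount_log": math.log1p(max(record["amount"], 0.0)),
        "is_weekend": 1 if record["day_of_week"] in (5, 6) else 0,
    }

def build_training_set(raw_records):
    # Offline/batch path: the whole historical dataset goes through transform.
    return [transform(r) for r in raw_records]

def handle_prediction_request(raw_record, model):
    # Online/streaming path: the very same transform runs per request,
    # so training features and serving features cannot drift apart.
    return model(transform(raw_record))
```

Because both paths call the same `transform`, a change to the feature logic applies everywhere at once, which is the consistency the pipeline is meant to guarantee.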
Having a strategy in place to detect concept or data drift will allow signaling when the model should be retrained, or when the data should be adjusted or augmented. Ensuring that your pipeline considers all the necessary tasks for health checks and alerts is the most effective way to avoid dissatisfaction from the users consuming your model's predictions.

Focusing on the development and deployment phases, we see that they have multiple steps. Data exploration, for example, involves data extraction, data analysis, and data preparation. Model building comprises training, evaluation, and validation. Deployment requires hosting the trained model, serving it, and having a prediction service ready to handle requests. And finally, monitoring allows for continuous evaluation and retraining, based on the performance results at a given point.

The level of automation of these steps defines the maturity of the process, which reflects the velocity of training new models given new data, or training new models given new implementations. Many ML professionals build and deploy their ML models manually; we call this maturity level zero. Other data scientists perform continuous training of their models by automating the ML pipeline; this is maturity level one. Finally, the most mature approach completely automates and integrates the ML training, validation, and deployment phases; this is maturity level two. You and your team have probably begun at, or still are at, maturity level zero, and that's nothing to worry about. Our goal here is to help you automate your processes and move up the automation ladder with the suite of tools and services available on Google Cloud. Stay tuned, and have fun.
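As an illustration of how a drift signal might be wired into a monitoring pipeline, here is a minimal sketch that compares a feature's recent statistics against its training baseline and flags when they diverge. The rule and threshold are hypothetical; production systems typically rely on proper statistical tests such as the Kolmogorov-Smirnov test or the population stability index:

```python
import statistics

def drift_detected(baseline_values, recent_values, threshold=0.25):
    """Signal retraining when the recent mean of a feature moves more than
    `threshold` baseline standard deviations away from the training mean.
    (Hypothetical rule of thumb for illustration only.)"""
    base_mean = statistics.mean(baseline_values)
    base_std = statistics.stdev(baseline_values)
    recent_mean = statistics.mean(recent_values)
    if base_std == 0:
        return recent_mean != base_mean
    return abs(recent_mean - base_mean) / base_std > threshold
```

A monitoring job could run this check periodically per feature and, when it fires, raise an alert or trigger the retraining pipeline.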