Now, let's walk through the phases of developing and publishing a machine learning model. Think of MLOps as a life cycle management discipline for machine learning. Its goal is a balanced approach to the management of resources, data, code, time, and quality to achieve business objectives and meet regulatory concerns.

Some of the concepts from DevOps translate directly to MLOps. When software developers work on a project, they don't all work on the same code at the same time. Instead, each developer checks out the code they intend to work on from a shared repository and merges it back when their task is finished. Before the code is returned to the repository, the developer checks that nothing has changed in the main version and then unit tests the updates before merging the code back together. The more frequently these changes are merged with the main code, the less chance there is of a divergence. This process is called continuous integration, or CI. In a busy development team, this happens tens of times a day.

Another process favored by developers is continuous delivery, or CD. This is a method for building, testing, and releasing software in short cycles. Done this way, the main development code is almost always production ready and can be released into the live environment at any time. If it is not done this way, the main code is like a race car with its wheels off and its engine out: it can go fast, but only after it's put back together. Continuous delivery can be done either manually or automatically.

Continuous integration of source code, unit testing, integration testing, and continuous delivery of the software to production are important processes in machine learning operations too. But there is another important aspect to MLOps, that's right, data. Unlike conventional software that can be relied on to do the same thing every time, an ML model can go off. By this, we mean that its predictive power wanes as data profiles change, which they inevitably do. So we build on continuous integration and continuous delivery and introduce a new term, continuous training, or CT. Continuous training is the process of monitoring, measuring, retraining, and serving the models.

MLOps differs from DevOps in important ways too. Continuous integration is no longer only about testing and validating code and components, but also about testing and validating data, data schemas, and models; a simple example of such a data check appears below. It is no longer about a single software package or service, but a system: the ML training pipeline that should automatically deploy another service, the model prediction service. Uniquely, MLOps is also concerned with automatically monitoring, retraining, and serving the models.

Another concept that transfers well from software development to machine learning is technical debt. Software developers are familiar with time, resources, and quality trade-offs. They talk about technical debt, which is the backlog of rework that builds up because sometimes they have to compromise on quality in order to develop code quickly. They understand that although there may have been good reasons to do this, they will have to go back and fix things later. This is an engineering inversion of the common saying: putting off until tomorrow what is better done today. There is a price to pay. Machine learning could arguably be considered the high-interest credit card of technical debt. This means that developing and deploying an ML system can be relatively fast and cheap, but maintaining it over time can be difficult and expensive.
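To make the point about CI for ML concrete, here is a minimal sketch, in plain Python, of the kind of data validation a CI job might run before any retraining or deployment step. The column names, types, and the idea of failing the build on violations are assumptions for illustration, not a prescribed tool or API.

```python
# Minimal sketch of an ML-flavored CI check: alongside ordinary unit tests,
# the pipeline validates an incoming data batch against the schema the model
# expects. Column names and types below are illustrative assumptions.
from typing import Any

EXPECTED_SCHEMA = {
    "age": int,
    "income": float,
    "country": str,
}


def validate_schema(rows: list[dict[str, Any]]) -> list[str]:
    """Return human-readable schema violations; an empty list means the batch passes."""
    errors: list[str] = []
    for i, row in enumerate(rows):
        missing = set(EXPECTED_SCHEMA) - set(row)
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
        for column, expected_type in EXPECTED_SCHEMA.items():
            if column in row and not isinstance(row[column], expected_type):
                errors.append(
                    f"row {i}: column '{column}' is {type(row[column]).__name__}, "
                    f"expected {expected_type.__name__}"
                )
    return errors


if __name__ == "__main__":
    batch = [
        {"age": 34, "income": 52000.0, "country": "DE"},
        {"age": "29", "income": 41000.0, "country": "FR"},  # wrong type on purpose
    ]
    for problem in validate_schema(batch):
        print(problem)  # in a real CI job, any violation would fail the build
```

Teams often use dedicated data-validation tooling for this, but the principle is the same: the data contract is tested just like the code.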
The real challenge isn't building an ML model; it is building an integrated ML system and continuously operating it in production. Just like a high-interest credit card, the technical debt in machine learning compounds and can be incredibly expensive and difficult to pay down.

Machine learning systems can be thought of as a special type of software system. Operationally, they have all the challenges of software development, plus a few of their own. One of these is multi-functional teams, because ML projects will probably have developers and data scientists working on data analysis, model development, and experimentation. Multi-functional teams can create their own management challenges.

Machine learning is experimental in nature. You must constantly try new approaches with the data, the models, and the parameter configurations. The challenge is tracking what worked and what didn't, and maintaining reproducibility while maximizing code reusability.

Another consideration is that testing an ML system is more involved than testing other software systems, because you're validating data, parameters, and code together in a system instead of unit-testing methods and functions. In ML systems, deployment isn't as simple as deploying an offline-trained ML model as a prediction service. ML systems can require you to deploy a multi-step pipeline to automatically retrain and deploy models.

Finally, concerns with concept drift and the consequent model decay should be addressed. Data profiles constantly change, and if something changes in the data input, the predictive power of the model in production will likely change with it. Therefore, you need to track summary statistics of your data and monitor the online performance of your model, so that you can send notifications or roll back when values deviate from your expectations; a bare-bones version of such a check is sketched below. Technical debt builds up in an ML system for many reasons, so we'll be looking at ways to mitigate that throughout this course.
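Picking up the point about tracking summary statistics, the sketch below compares a baseline statistic captured at training time with the same statistic computed over recent traffic, and flags a deviation beyond a threshold. The baseline numbers, threshold, and feature window are hypothetical values chosen only for illustration.

```python
# Minimal sketch of drift monitoring: compare a summary statistic of live
# traffic against a baseline captured at training time and flag large
# deviations. Baseline and threshold values are hypothetical; a real system
# would persist the baseline and wire the alert to notifications, rollback,
# or a retraining trigger.
import statistics

TRAINING_BASELINE = {"mean": 50.0, "stdev": 10.0}  # captured when the model was trained
DRIFT_THRESHOLD = 3.0  # flag if the live mean moves more than 3 baseline stdevs


def check_drift(live_values: list[float]) -> bool:
    """Return True if the live feature distribution has drifted from the baseline."""
    live_mean = statistics.fmean(live_values)
    shift = abs(live_mean - TRAINING_BASELINE["mean"]) / TRAINING_BASELINE["stdev"]
    if shift > DRIFT_THRESHOLD:
        print(f"Drift detected: live mean {live_mean:.2f} is {shift:.1f} stdevs from baseline")
        return True
    return False


if __name__ == "__main__":
    recent_window = [82.0, 91.5, 88.0, 79.2, 95.1]  # simulated recent inputs for one feature
    if check_drift(recent_window):
        pass  # in practice: send a notification, roll back, or kick off retraining
```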