MLOps and Continuous Training
When you develop a model using Kubeflow Pipelines, you create a stand-alone process dedicated to training that model. The model must be continually updated in response to changes in the data or to feedback from the monitoring service observing the model in production. The Continuous Training process in an MLOps environment is designed to maintain the model that ultimately makes its way to production, and the steps you took in this course directly mirror the Continuous Training process used in an MLOps deployment.
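As a rough illustration of what such a stand-alone training process looks like, here is a minimal sketch using the KFP v2-style SDK. The component body, base image, pipeline name, and data path are placeholders for illustration, not the pipeline you built in this course.

```python
from kfp import dsl, compiler

# Hypothetical training component: a stand-in for the real training logic.
@dsl.component(base_image="python:3.10")
def train_model(data_path: str, model_uri: dsl.OutputPath(str)):
    # Placeholder training step: write a marker file where the trained
    # model artifact would normally be serialized.
    with open(model_uri, "w") as f:
        f.write(f"model trained on {data_path}")

@dsl.pipeline(name="standalone-training-pipeline")
def training_pipeline(data_path: str = "gs://example-bucket/training-data"):
    # A single training stage; a real Continuous Training pipeline would also
    # include data validation, evaluation, and registration steps.
    train_model(data_path=data_path)

if __name__ == "__main__":
    # Compile to a package that can be uploaded or scheduled later.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```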
Initially, you downloaded the data, performed data exploration, and identified the features necessary for building the model. In a mature environment, the data would be pulled from an External Data Source, the features from a Feature Store, and the model build would be based on these standard inputs, with cleaned data made available throughout the model development life cycle. The Kubeflow Pipeline that creates the model is just one stage in a larger Continuous Training process that triggers repeatedly on a schedule, in response to changes in the data set, or in response to updates to the Feature Store. Since the Kubeflow Pipeline is snapshotted and stored in Rok, re-training the model is simply a matter of automation: loading the snapshot with the cleaned data and executing the training pipeline.
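To make the scheduled trigger concrete, the sketch below registers the compiled pipeline as a recurring run using the kfp SDK client. The host URL, experiment name, and cron schedule are assumptions for illustration, and it assumes a v1-style client whose experiment object exposes an `id` attribute; newer clients offer a similar `create_recurring_run` method with slightly different attribute names.

```python
import kfp

# Connect to the Kubeflow Pipelines API; host and experiment are placeholders.
client = kfp.Client(host="http://ml-pipeline-ui.kubeflow:80")
experiment = client.create_experiment(name="continuous-training")

# Re-run the compiled training pipeline every night at 02:00 (6-field cron),
# one common way to trigger Continuous Training on a cadence.
client.create_recurring_run(
    experiment_id=experiment.id,
    job_name="nightly-retraining",
    cron_expression="0 0 2 * * *",
    pipeline_package_path="training_pipeline.yaml",
)
```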
Continuous Training is not just about repeatedly creating a model; it is about repeatedly creating, testing, and validating the ideal model. You did the same thing in this course: once a suitable model was identified based on the selected features, you performed Hyperparameter Tuning to arrive at the ideal model. Because Hyperparameter Tuning is done with Kubeflow Pipelines, which push execution output to an Artifact Store, the ideal model can be identified programmatically. Through automation, Continuous Training does not merely create a model; it creates the ideal model by iterating through many options and identifying the best fit to ship to production.
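Programmatic selection can be as simple as scanning the metrics each tuning trial exports. The sketch below assumes each trial wrote a small JSON record of its parameters and score into a directory standing in for the Artifact Store; the file layout, directory path, and metric name are hypothetical.

```python
import json
from pathlib import Path

def select_best_run(metrics_dir: str, metric: str = "accuracy") -> dict:
    """Scan metric files exported by tuning trials and return the best one.

    Assumes each trial wrote a JSON file such as
    {"params": {"lr": 0.01, "depth": 6}, "accuracy": 0.93}
    into metrics_dir, a stand-in for whatever Artifact Store you use.
    """
    best = None
    for path in Path(metrics_dir).glob("*.json"):
        record = json.loads(path.read_text())
        if best is None or record[metric] > best[metric]:
            best = record
    return best

# Example: promote the trial with the highest accuracy.
# best = select_best_run("./artifact-store/tuning-metrics")
# print(best["params"], best["accuracy"])
```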
However, selecting the ideal model through Hyperparameter Tuning is not enough to close out the Continuous Training process. For a model to be truly considered trained and ready to move to integration, it must be tested. This is the "shift-left" principle from DevOps practice, an approach that MLOps borrows: by performing as much testing as early as possible, future maintenance and production issues can be avoided. A lightweight early check of this kind is often referred to as a "smoke test".
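A smoke test for a trained model can be very small: load the artifact, run it on a few representative inputs, and assert that the output is sane. The sketch below assumes a scikit-learn-style classifier serialized with joblib; the model path, sample features, and expected class labels are placeholders.

```python
import joblib
import numpy as np

def smoke_test_model(model_path: str) -> None:
    """Minimal smoke test: the model loads and produces sane predictions."""
    model = joblib.load(model_path)

    # A handful of representative inputs (iris-like measurements here).
    sample = np.array([[5.1, 3.5, 1.4, 0.2],
                       [6.7, 3.0, 5.2, 2.3]])
    predictions = model.predict(sample)

    # Fail fast if anything basic is wrong, long before integration.
    assert len(predictions) == len(sample), "prediction count mismatch"
    assert all(p in {0, 1, 2} for p in predictions), "unexpected class label"

# smoke_test_model("./artifact-store/best_model.joblib")
```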
Finally, a Continuous Training process must push the Kubeflow Pipeline to a central repository so that a Continuous Integration process can migrate the model across environments. Rok Registry allows Kubeflow Pipeline snapshots to be migrated seamlessly, so the model creation process can be replicated identically in any destination environment, separate from where Continuous Training takes place.
The significance of a Continuous Training process cannot be overstated: it is the result of the happy marriage between the Model Development Life Cycle and the DevOps principles that support MLOps.