MLOps and Kubeflow Pipelines

Kubeflow Pipelines give you the connective tissue to train models with various frameworks, iterate on them, and eventually expose them for serving with KServe. Once the work has been done in a Jupyter Notebook and the code converted to a Kubeflow Pipeline with Kale, the entire Model Development Life Cycle lives within the Kubeflow Pipeline components and definitions. A well-defined pipeline has the following characteristics (a brief code sketch follows below):

  • A quick way to give code access to the necessary data.
  • An easy way to pass the data between steps.
  • The flexibility to display outputs.
  • The ability to revisit / reuse data without repeatedly querying external systems.
  • The ability to opt out of unnecessary steps that are intended only for experimentation.
  • The confidence that imports and function dependencies are addressed.
  • The flexibility to define Hyperparameters and Pipeline Metrics within the Jupyter environment and store them within pipeline definitions.
  • The ability to manage and consolidate everything within the notebook(s), avoiding the juggling of tabs, tools, and toil.

For more technical detail on this topic, please refer to the OSS documentation: https://www.kubeflow.org/docs/components/pipelines/introduction.
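To make these characteristics concrete, here is a minimal sketch using the Kubeflow Pipelines (kfp) v2 SDK. The component names, base image, placeholder logic, and file name are illustrative assumptions rather than part of the course material; the point is the pattern of passing artifacts between steps, surfacing hyperparameters as pipeline parameters, and logging metrics from within the pipeline definition.

    # Minimal sketch with the kfp v2 SDK. Names, image, and logic are placeholders.
    from kfp import dsl, compiler


    @dsl.component(base_image="python:3.11")
    def load_data(source_uri: str, dataset: dsl.Output[dsl.Dataset]):
        # Give the code quick access to the necessary data and persist it as an
        # artifact so later steps can reuse it without re-querying external systems.
        with open(dataset.path, "w") as f:
            f.write(f"rows pulled from {source_uri}")


    @dsl.component(base_image="python:3.11")
    def train(
        dataset: dsl.Input[dsl.Dataset],
        learning_rate: float,
        metrics: dsl.Output[dsl.Metrics],
    ):
        # Hyperparameters arrive as pipeline parameters; metrics are logged so
        # they can be displayed alongside the run.
        with open(dataset.path) as f:
            _ = f.read()
        metrics.log_metric("accuracy", 0.90)  # placeholder value


    @dsl.pipeline(name="example-training-pipeline")
    def training_pipeline(source_uri: str, learning_rate: float = 0.01):
        data_step = load_data(source_uri=source_uri)
        # The output of one step is passed directly to the next.
        train(dataset=data_step.outputs["dataset"], learning_rate=learning_rate)


    if __name__ == "__main__":
        # Compile to a portable definition that can be uploaded to Kubeflow.
        compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")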

The Kubeflow Pipelines that Data Scientists create with Kubeflow are both portable and composable, allowing for easy migration from development to production. This is because Kubeflow Pipelines are defined by multiple pipeline components, referred to as steps, each of which is a self-contained set of executable user code. Kubeflow Pipelines components are built from the compiled data science code, the pipeline step container images, and the associated package dependencies. Each step in the pipeline runs decoupled in its own Pod, which contains the step's code and references the outputs of previous steps. Each container (often acting as a base image for a pipeline step) can be stored in a centralized Container Registry. Managing containers via the Container Registry introduces versioning and model lineage tracking for future audit and reproducibility needs.

Snapshots of the Kubeflow Pipeline are taken at the start of execution and before and after each step in the pipeline. Ideally, in the future state, each training run produces a model that is stored in a Model Registry, along with execution artifacts that are stored in their respective Artifact Stores. As a result, the entire lineage of the model is preserved, and the exact steps required to recreate the model in another environment can be shared. This lays the foundation for the Continuous Integration and Continuous Deployment processes that ultimately push and support models in production. We will discuss Continuous Integration and Continuous Deployment towards the end of this course.
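As a hedged illustration of the points above, the sketch below pins a pipeline step to a versioned image pulled from a container registry and then submits the compiled definition to a Kubeflow Pipelines endpoint. The registry address, image tag, host URL, and data URI are hypothetical placeholders; the pattern shows how an immutable, versioned image per step supports lineage tracking, and how the same compiled package can be submitted to a different cluster when moving from development to production.

    # Sketch: versioned step image from a registry + submitting the compiled package.
    # Registry, image tag, host, and data URI below are hypothetical.
    from kfp import dsl, compiler
    from kfp.client import Client


    @dsl.container_component
    def train_step(data_uri: str):
        # The step runs in its own Pod from an immutable, versioned image, which
        # gives the Container Registry a record for lineage and reproducibility.
        return dsl.ContainerSpec(
            image="registry.example.com/ml-team/trainer:1.4.2",
            command=["python", "train.py"],
            args=["--data-uri", data_uri],
        )


    @dsl.pipeline(name="containerized-training")
    def containerized_training(data_uri: str):
        train_step(data_uri=data_uri)


    if __name__ == "__main__":
        compiler.Compiler().compile(containerized_training, "containerized_training.yaml")
        # Submitting the same compiled package to another cluster is how the
        # pipeline migrates from development to production.
        client = Client(host="https://kubeflow.example.com/pipeline")
        client.create_run_from_pipeline_package(
            "containerized_training.yaml",
            arguments={"data_uri": "s3://example-bucket/training-data"},
        )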