Hyperparameter Tuning w/ Katib Overview

Kubeflow's AutoML component, Katib, automates the optimization of machine learning models. Katib is agnostic to machine learning frameworks and performs hyperparameter tuning, early stopping, and neural architecture search for training code written in any of various languages. Automated machine learning (AutoML) enables non-experts in data science to make use of machine learning models and techniques and apply them to their problems; in other words, AutoML attempts to simplify the process of building and optimizing models through automation.
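
To make these pieces concrete, the sketch below wires a search space, an objective, and a search algorithm into a Katib Experiment. It is a minimal sketch that assumes the kubeflow-katib Python SDK is installed and that a placeholder training image exists which accepts --lr and --batch-size arguments and logs the objective metric in a form Katib's default metrics collector can parse; the names and values are illustrative only. When you work through Kale (discussed below), a specification like this is generated for you from the notebook.

```python
from kubernetes.client import V1ObjectMeta
from kubeflow.katib import (
    KatibClient,
    V1beta1AlgorithmSpec,
    V1beta1Experiment,
    V1beta1ExperimentSpec,
    V1beta1FeasibleSpace,
    V1beta1ObjectiveSpec,
    V1beta1ParameterSpec,
    V1beta1TrialParameterSpec,
    V1beta1TrialTemplate,
)

# Objective: maximize the "accuracy" metric reported by each trial.
objective = V1beta1ObjectiveSpec(
    type="maximize",
    goal=0.95,
    objective_metric_name="accuracy",
)

# Search algorithm: random search over the feasible space defined below.
algorithm = V1beta1AlgorithmSpec(algorithm_name="random")

# Hyperparameters to tune: Katib picks a value for each one per trial.
parameters = [
    V1beta1ParameterSpec(
        name="lr",
        parameter_type="double",
        feasible_space=V1beta1FeasibleSpace(min="0.01", max="0.1"),
    ),
    V1beta1ParameterSpec(
        name="batch_size",
        parameter_type="int",
        feasible_space=V1beta1FeasibleSpace(min="32", max="128"),
    ),
]

# Each trial runs this Kubernetes Job (with a placeholder image), and the
# chosen hyperparameter values are substituted into the container arguments.
trial_template = V1beta1TrialTemplate(
    primary_container_name="training",
    trial_parameters=[
        V1beta1TrialParameterSpec(name="learningRate", reference="lr"),
        V1beta1TrialParameterSpec(name="batchSize", reference="batch_size"),
    ],
    trial_spec={
        "apiVersion": "batch/v1",
        "kind": "Job",
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": "training",
                            "image": "docker.io/example/train:latest",  # placeholder
                            "command": [
                                "python", "train.py",
                                "--lr=${trialParameters.learningRate}",
                                "--batch-size=${trialParameters.batchSize}",
                            ],
                        }
                    ],
                    "restartPolicy": "Never",
                }
            }
        },
    },
)

experiment = V1beta1Experiment(
    api_version="kubeflow.org/v1beta1",
    kind="Experiment",
    metadata=V1ObjectMeta(name="random-search-example", namespace="kubeflow-user"),
    spec=V1beta1ExperimentSpec(
        max_trial_count=12,
        parallel_trial_count=3,
        max_failed_trial_count=3,
        objective=objective,
        algorithm=algorithm,
        parameters=parameters,
        trial_template=trial_template,
    ),
)

KatibClient().create_experiment(experiment, namespace="kubeflow-user")
```

Swapping `algorithm_name` to, for example, "grid" or "bayesianoptimization" changes the search strategy without touching the rest of the spec.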

Hyperparameter Definition

A hyperparameter is a value you must set before training begins, in contrast to a model parameter, which is estimated from the data during training:

| Hyperparameters | Model Parameters |
| --- | --- |
| Values are set beforehand | Values are estimated during training with historical data |
| External to the model | Part of the model |
| Values are not saved, as they are not part of the trained model | Estimated values are saved with the trained model |
| Independent of the dataset | Dependent on the dataset the system is trained with |
Tip

If you have to specify the value beforehand, it is a hyperparameter.
Examples:

  • The learning rate for training a neural network.
  • The C and σ (sigma) hyperparameters for support vector machines.
  • The k in k-nearest neighbors.
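
To make the distinction concrete, here is a minimal sketch assuming scikit-learn: the SVM's C and gamma (gamma plays the role of sigma in the RBF kernel) are hyperparameters fixed before training, while the support vectors and coefficients are parameters estimated from the data and stored with the fitted model.

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hyperparameters: specified beforehand, external to the model.
model = SVC(C=1.0, gamma=0.1)

# Parameters: estimated during training and saved with the trained model.
model.fit(X, y)
print(model.support_vectors_.shape)  # learned from this dataset
print(model.dual_coef_.shape)        # learned coefficients
```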

Hyperparameter Tuning & Kale

You will continue to use Kale when working with Katib: Kale orchestrates the experiments so that every Katib trial is a unique pipeline execution.
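
With Kale, the hyperparameters Katib searches over are simply the pipeline parameters Kale detects in your notebook. As a minimal sketch, assuming a notebook annotated with the Kale JupyterLab extension, a cell tagged `pipeline-parameters` might declare them like this (the variable names and values are illustrative):

```python
# Cell tagged "pipeline-parameters" in the Kale JupyterLab extension.
# Kale exposes these as pipeline parameters; in a Katib-driven run,
# each trial overrides them and executes as its own pipeline run.
LR = 0.003        # learning rate searched by Katib
BATCH_SIZE = 64   # batch size searched by Katib
EPOCHS = 5        # can stay fixed or be added to the search space
```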