
Prepare for Hyperparameter Tuning - Ideal Model

Before you can perform hyperparameter tuning, you must select the model you wish to tune. To do this, you will run several candidate models and evaluate quality metrics for each. Once the ideal model has been selected, you are ready to proceed to hyperparameter tuning.

Identify Model for Tuning

The ideal model for tuning is typically the model that performs best on the evaluation quality metrics. As a first step, run the Jupyter Notebook with Kale enabled to generate model test results. Before running your models, make sure that each one outputs its quality measurements.

Follow Along

Please follow along in your own copy of our notebook as we complete the steps below.

1. Confirm Output from Eval Steps

Notice that the eval_* steps contain code to print out model quality measurements.

[Screenshot: output code]

Each cell that performs evaluation should have a similar line of code.
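As a rough sketch of what such an eval cell computes, the three metrics can be produced as below. The data and variable names (`y_true`, `y_pred`) are placeholders, not taken from the actual notebook; in practice the notebook would likely call scikit-learn's `r2_score`, `mean_squared_error`, and `mean_squared_log_error` instead of computing them by hand.

```python
import math

# Placeholder test labels and model predictions (made-up values for illustration).
y_true = [3.0, 2.5, 4.0, 5.5]
y_pred = [2.8, 2.7, 4.2, 5.0]

n = len(y_true)
mean_y = sum(y_true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares

r2 = 1 - ss_res / ss_tot                                    # r-squared
mse = ss_res / n                                            # mean squared error
msle = sum((math.log1p(t) - math.log1p(p)) ** 2
           for t, p in zip(y_true, y_pred)) / n             # mean squared logarithmic error

print(r2, mse, msle)
```

Printing the three numbers in this fixed order is what lets you read them straight out of the pipeline logs later.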

2. Compile and Run Notebook with Kale

Select Compile and Run to test the models using Kubeflow pipelines.

[Screenshot: compile and run]

Confirm successful execution by viewing the output from Kale.

[Screenshot: compile and run success]

3. Access Kubeflow Pipeline

Click View next to Running pipeline to access the relevant Kubeflow pipelines logs for analysis.

[Screenshot: view pipeline]

Once in the pipeline display, scroll down to observe three pipeline branches, executed in parallel, one for each of the three models.

[Screenshot: view car price pipelines]

4. Review a Model

Click eval_lgbm and then click Logs to see the quality metric output.

[Screenshot: eval model logs]

Scroll until you see the three numbers printed together; these are the output of the eval step for this model.


Recall that the metrics are presented in the following order:

  • r-squared
  • mean squared error
  • mean squared logarithmic error

Take note of these three numbers. You will need them to compare against the other models in the subsequent lab.
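Once you have recorded the three numbers for every model, the comparison can be sketched as below. The model names and metric values here are made up for illustration and do not come from the actual pipeline output; a higher r-squared and lower error values indicate a better fit.

```python
# Placeholder quality metrics for three candidate models (made-up names and values).
results = {
    "model_a": {"r2": 0.91, "mse": 1.2e6, "msle": 0.031},
    "model_b": {"r2": 0.89, "mse": 1.4e6, "msle": 0.034},
    "model_c": {"r2": 0.93, "mse": 1.1e6, "msle": 0.028},
}

# Pick the model with the highest r-squared as the candidate for tuning.
best = max(results, key=lambda name: results[name]["r2"])
print(best)
```

In this made-up example, `model_c` would be selected because it has the highest r-squared and the lowest error metrics.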