Code-free machine learning with Ludwig


Chris Hunt

Posted on March 9, 2019


Intro and Ludwig

At the start of February 2019, Uber made their code-free machine learning toolbox, Ludwig, open-source.

Website - https://uber.github.io/ludwig/
User guide - https://uber.github.io/ludwig/user_guide/
Github repo - https://github.com/uber/ludwig/

Ludwig runs on top of the popular and powerful TensorFlow library and offers a CLI for experimenting with, training, and making predictions from machine learning models built on TensorFlow.

As an engineer, I'm absolutely not a data scientist. I know enough about TensorFlow to build the most basic of models by following tutorials, but I really couldn't create anything from scratch. Ludwig offered the opportunity to do just that.

Our first experiment

Let's dive in and run through a basic example. We're going to try to recreate the Keras tutorial at https://www.tensorflow.org/tutorials/keras/basic_regression with zero lines of code.

The Auto MPG dataset contains basic data about a selection of cars. Our task is to predict each car's MPG from the features provided. I've grabbed the dataset and converted it to a CSV file for use in this example.
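For reference, a couple of rows of the converted cars.csv might look like this (the column names need to match the model definition below; the values here are purely illustrative):

MPG,Cylinders,Displacement,Horsepower,Weight,Acceleration,ModelYear,Origin
18.0,8,307.0,130.0,3504,12.0,70,1
15.0,8,350.0,165.0,3693,11.5,70,1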

Ludwig uses a model definition file to determine the parameters for building the model. Ludwig deals with your data internally: it splits it into train, test and validation datasets, and preprocesses each feature into the best format for training depending on the data type you've specified.
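The split itself can be tuned if needed. As far as I can tell from the user guide, a global preprocessing section in the model definition controls how the data is divided; treat the exact keys below as an assumption to verify against the docs:

preprocessing:
  force_split: true
  split_probabilities: [0.7, 0.1, 0.2]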

The Keras example requires us to manipulate the data before we can train and test the model. Ludwig does all of this for us, so we can start training immediately by setting up a model definition file at modeldef.yaml. Here we define the input features and their data types, as well as the output feature and its parameters. Each feature also accepts a number of other parameters which can be set for more complex models (see the sketch after the definition below).

input_features:
  - 
    name: Cylinders
    type: numerical
  - 
    name: Displacement
    type: numerical
  - 
    name: Horsepower
    type: numerical
  - 
    name: Weight
    type: numerical
  - 
    name: Acceleration
    type: numerical
  - 
    name: ModelYear
    type: numerical
  - 
    name: Origin
    type: category
output_features:
  - 
    name: MPG
    type: numerical

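As an example of those extra per-feature parameters, a numerical input feature can carry its own preprocessing options. Something along these lines, with the key names taken from the Ludwig user guide as I read it, so treat them as illustrative rather than definitive:

  - 
    name: Horsepower
    type: numerical
    preprocessing:
      normalization: zscore
      missing_value_strategy: fill_with_mean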

First run

Our first experiment can now be run with the following command:

ludwig experiment  --data_csv cars.csv --model_definition_file modeldef.yaml --output_directory results
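Ludwig writes everything into a numbered sub-directory of results (results/experiment_run_0 on the first run), including the trained model and a training_statistics.json file that we'll use later. Roughly like this, though the listing is abbreviated and your own output may contain more files:

results/
  experiment_run_0/
    model/
    training_statistics.json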

This gives the following results:

===== MPG =====
loss: 52.9658573971519
mean_absolute_error: 6.3724554520619066
mean_squared_error: 52.9658573971519
r2: 9.58827477467211e-05

After the 200 epochs complete, I have a mean absolute error (MAE) of 6.4 (yours may vary slightly depending on the random train/test split). This means that, on average, the predicted MPG for a car is 6.4 MPG away from the actual value. Bearing in mind that values in the dataset generally sit between 10 MPG and 47 MPG, 6.4 MPG is quite a large error.
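For anyone unfamiliar with the metric, MAE is simply the mean absolute difference between predicted and actual values over the test set:

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|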

Refinement

If you were watching the log scrolling as Ludwig was running, you'd have seen the MAE against the validation set reducing with each epoch.

The Keras example suggests a final MAE of ~2, so we need a bit of tweaking to get closer. There was a fair indication that the MAE was still decreasing as the run ended, so we can increase the number of epochs with a simple addition to the model definition:

training:
  epochs: 400

and continue from the previously trained model with the following command (-mrp is short for --model_resume_path, which points Ludwig at the earlier run to resume from):

ludwig experiment  --data_csv cars.csv --model_definition_file modeldef.yaml --output_directory results -mrp ./results/experiment_run_0

Our MAE only comes down to 5.3MPG. Still not that close.

Further refinement

In a real-life example, we'd amend the hyperparameters, retrain, and keep amending and retraining for as long as our target MAE keeps falling.

We'll skip this step by replicating the hyperparameters from the Keras tutorial:

training:
  batch_size: 32
  epochs: 400
  early_stop: 50
  learning_rate: 0.001
  optimizer:
    type: rmsprop

In addition, we set early stopping at 50 epochs: this means that our model will stop training if our validation loss doesn't improve for 50 epochs. The experiment is fired off in the same way as before and produces these results:

Last improvement of loss on combined happened 50 epochs ago

EARLY STOPPING due to lack of validation improvement, it has been 50 epochs since last validation accuracy improvement

Best validation model epoch: 67

loss: 10.848812248133406
mean_absolute_error: 2.3642308198952975
mean_squared_error: 10.848812248133406
r2: 0.026479910446118703

We get a message that our model has stopped training at 132 epochs because it's hit the early stop limit.

MAE is down to 2.36 MPG without writing a line of code, and we've got our example to results similar to those of the Keras tutorial.

Visualising our training

Now we'd like to check that our training and validation loss curves stay reasonably close together without showing signs of overfitting. Ludwig continues to deliver on its promise of a no-code solution: we can view our learning curves with the following command:

ludwig visualize -v learning_curves -ts results/experiment_run_0/training_statistics.json

[Figure: learning curves]

The curves continue to follow a similar trajectory. Should the validation curve start heading upwards while the training curve keeps falling, that would suggest overfitting is occurring.

Real life validation

Ok, this is all well and good but tutorials notoriously pick and choose data so the output "just works". Let's try our model out with some real data.

With a bit of investigation, I've dug out the required stats of the DeLorean DMC-12 (https://en.wikipedia.org/wiki/DMC_DeLorean):

Cylinders:     6
Displacement:  2849cc (174 cubic inches)
Horsepower:    130hp
Weight:        1230 kg (2712 lb)
Acceleration:  10.5s
Year:          1981
Origin:        US

and converted it to the same CSV format as the training data (with displacement in cubic inches, weight in pounds, a two-digit model year, and Origin using the dataset's numeric encoding, where 1 represents the US):

Cylinders,Displacement,Horsepower,Weight,Acceleration,ModelYear,Origin
6,174,130,2712,10.5,81,1


Now, to predict the fuel economy of this, we run the predict command through Ludwig:

ludwig predict --data_csv delorean.csv -m results/experiment_run_0/model -op

We specify the -op flag to tell Ludwig that we only want predictions. If we supplied a CSV file with an MPG column and omitted this flag, Ludwig would run the predictions and also report statistics comparing them against the actual values in the file.

The result given by my model is 23.53405 MPG. How good is this? Unfortunately the Wikipedia article doesn't list the published fuel economy, but I did manage to find it in a fantastic article about this amazing car: 22.8 MPG. A pretty decent real-life test!

Summary

I appreciate that the data scientists out there are screaming that we didn't run any analysis on the input features to create a meaningful feature set, and that we didn't run specific analysis on the test data predictions. I also appreciate that MAE isn't necessarily the ultimate measure of accuracy, as it can be skewed heavily by outliers, something we could have checked with further analysis.

What we have shown is that using Ludwig, we can experiment and train a machine learning model and then predict using the model we've trained.

Machine learning is becoming more and more accessible, and Ludwig seems to be a big step forward in that regard.
