Deploying ML models straight from Jupyter Notebooks

aguschin

Alexander Guschin

Posted on January 26, 2023

Winter is a time of magic 🧙‍♂️. Everyone is waiting for something special at this time of year, and Data Scientists are no different. It's not in the power of a software developer to be a magician, but I can help you deploy your models with literally a single command, right from your Jupyter notebook (or from basically anywhere else, like your command line or a Python script).

Sounds like magic? It is! 💫

Streamlit

To get some winter-season vibes, let's do some magic ourselves first and prepare a bit of fun for our friends for the weekend.

To do so, we’ll create a model that translates lyrics to emojis. With all due respect to recent advances in NLP and LLM algorithms, it’s still both easier and more fun to convince your friends to do the backward translation:

ChatGPT

Ok, I’m sure humans are up to the challenge!

Alright, just before we get into the actual coding: everything described in this blog post is available in this Google Colab notebook. Now, let's get to it!

First, let’s load an emoji dataset. We need something to base our model on, right?

Load emoji dataset
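A minimal sketch of that step might look like the snippet below; the file name and the "emoji"/"name" columns are assumptions for illustration, and the actual dataset used in the Colab notebook may differ:

```python
import pandas as pd

# Hypothetical emoji dataset: one emoji per row plus a short text description.
emoji_df = pd.read_csv("emoji_dataset.csv")  # assumed columns: "emoji", "name"
emoji_df.head()
```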

The secret sauce to creating our emoji language is using a pretrained DistilBERT model to tokenize the emoji descriptions and create embeddings that represent our emoji dictionary:

Turn emojis into embeddings
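In spirit, the embedding step looks roughly like the sketch below: load a pretrained DistilBERT from Hugging Face transformers and mean-pool its hidden states into one vector per emoji description. The pooling choice and the `emoji_df["name"]` column are assumptions; the notebook's exact code may differ:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
bert = AutoModel.from_pretrained("distilbert-base-uncased")

def embed(texts):
    """Tokenize texts and mean-pool DistilBERT's last hidden state into one vector each."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state        # (batch, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)        # (batch, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, dim)

# One embedding per emoji description — our "emoji dictionary".
emoji_vectors = embed(emoji_df["name"].tolist())
```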

We can now embed any word the same way and replace it with the emoji whose embedding is “closest”, giving us a text→emoji translator. With that, “Jingle bells” should become something like “🔔🔔”:

Find the closest emoji for each word
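A minimal version of that lookup can use cosine similarity between each word embedding and every emoji embedding; again, this is a sketch rather than the exact notebook code:

```python
import torch.nn.functional as F

def translate(lyrics: str) -> str:
    """Replace each word with the emoji whose embedding is closest by cosine similarity."""
    words = lyrics.split()
    word_vectors = embed(words)                                    # (n_words, dim)
    sims = F.cosine_similarity(
        word_vectors.unsqueeze(1), emoji_vectors.unsqueeze(0), dim=-1
    )                                                              # (n_words, n_emojis)
    best = sims.argmax(dim=1)
    return "".join(emoji_df["emoji"].iloc[int(i)] for i in best)

translate("Jingle bells")  # hopefully something like 🔔🔔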

Good start - it guessed half of the emojis correctly!

Our part of the magic is done; now on to the single-command deployment I promised at the beginning. Before we go rogue and deploy it to the cloud, let's run a Streamlit app locally to test things out:

Serving with Streamlit
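This step boils down to two things: saving the model with MLEM and serving it with MLEM's Streamlit server. A minimal sketch, assuming we wrap the `translate` function directly (the exact code in the notebook may differ):

```python
from mlem.api import save

# MLEM inspects the saved object and a sample input, recording everything
# needed to serve it later (including the Python packages it depends on).
save(translate, "emoji-translator", sample_data="Jingle bells")
```

With the model saved, the local Streamlit app can be started right from the notebook (the `!` is a Jupyter shell escape; the exact flags may vary between MLEM versions):

```python
!mlem serve streamlit --model emoji-translator
```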

What happened here? That innocent-looking mlem.api.save method inspected the model object, figured out which Python packages it needs, and did the magic of preparing the model to be served! 🪄✨

Now you should have a Streamlit app at localhost:80 that looks just like this:

Streamlit app

Once we've finished playing around with the model locally, let's cast our final spell for the day 🧙‍♂️ and deploy the model to fly.io:

Deployment to fly.io
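For reference, that single command looks roughly like the sketch below (a Jupyter shell escape again; the deployment name is made up, the exact options may differ between MLEM versions, and the fly.io target assumes you have flyctl installed and are logged in):

```python
!mlem deployment run flyio emoji-app --model emoji-translator
```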

Some elvish gibberish is printed to the command line, and you get a deployment up and ready.

Now, before you go, remember that these powers extend to serving models as REST API applications and Streamlit apps, building Docker images and Python packages, and deploying them to Heroku, Fly.io, Kubernetes, and AWS SageMaker.

Or just go here to get a crash course :)
