Machine Learning Model Deployment with FastAPI and Docker

Code_Jedi

Posted on September 24, 2023

Machine learning model deployment is a crucial step in making your ML models accessible to users and applications. FastAPI, a modern Python web framework, and Docker, a containerization platform, have gained popularity for their efficiency and simplicity in deploying machine learning models. In this tutorial, we'll walk through the process of deploying a machine learning model using FastAPI and Docker, making it accessible via a RESTful API.

Before we get into this article: if you want to learn more about machine learning and Docker, I recommend the tutorials over at Educative, whom I chose to partner with for this tutorial.

Prerequisites

Before we begin, ensure you have the following:

  1. Python and pip installed on your system.
  2. A basic understanding of machine learning and Python.
  3. Docker installed on your system. You can download it from the official website: https://www.docker.com/get-started.
  4. The scikit-learn and joblib packages (pip install scikit-learn joblib), which the model script below uses.

Create a Machine Learning Model

For this tutorial, we'll use a simple scikit-learn model to classify iris flowers. You can replace it with your own trained model.

  1. Create a Python script (e.g., model.py) and define your model:
   import joblib
   from sklearn.datasets import load_iris
   from sklearn.ensemble import RandomForestClassifier

   # Load the iris dataset
   iris = load_iris()
   X, y = iris.data, iris.target

   # Train a random forest classifier
   model = RandomForestClassifier()
   model.fit(X, y)

   # Save the trained model
   joblib.dump(model, 'model.joblib')
  2. Run the script to train and save your model:
   python model.py
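As a quick sanity check that the model round-trips through joblib correctly, you can reload the saved file and classify one known sample. A minimal sketch (the random_state and the sample values are illustrative additions, not part of the original script):

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train and save exactly as model.py does (random_state added for reproducibility)
iris = load_iris()
model = RandomForestClassifier(random_state=0)
model.fit(iris.data, iris.target)
joblib.dump(model, 'model.joblib')

# Reload and classify a known setosa measurement from the training data
loaded = joblib.load('model.joblib')
pred = loaded.predict([[5.1, 3.5, 1.4, 0.2]])
print(iris.target_names[pred[0]])
```

If the printed class matches the sample's true label, the serialized model is good to serve.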

Create a FastAPI App

Now, let's create a FastAPI app that serves the machine learning model as a RESTful API.

  1. Create a new directory for your FastAPI app:
   mkdir fastapi-docker-ml
   cd fastapi-docker-ml
  2. Install FastAPI and Uvicorn:
   pip install fastapi uvicorn
  3. Create a FastAPI app script (e.g., app.py) and define the API:
   from fastapi import FastAPI
   from sklearn.datasets import load_iris
   import joblib
   import numpy as np

   app = FastAPI()

   # Load the trained model
   model = joblib.load('model.joblib')

   # Load the iris dataset so we can map class indices to names
   iris = load_iris()

   @app.get("/")
   def read_root():
       return {"message": "Welcome to the ML Model API"}

   @app.post("/predict/")
   def predict(data: dict):
       features = np.array(data['features']).reshape(1, -1)
       prediction = model.predict(features)
       class_name = iris.target_names[prediction[0]]
       return {"class": class_name}

Create a Dockerfile

To containerize our FastAPI app, we'll create a Dockerfile.

  1. Create a file named Dockerfile (with no file extension) in the same directory as your FastAPI app:
   # Use the official Python image
   FROM python:3.9

   # Set the working directory in the container
   WORKDIR /app

   # Copy the local code to the container
   COPY . .

   # Install the Python dependencies (scikit-learn and joblib are
   # needed to deserialize the model, numpy to build the input array)
   RUN pip install fastapi uvicorn scikit-learn joblib numpy

   # Expose the port the app runs on
   EXPOSE 8000

   # Command to run the application
   CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
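For anything beyond a quick demo, dependencies are usually pinned in a requirements.txt file and installed before the application code is copied in, so Docker can reuse the cached install layer across rebuilds. A hedged variant of the Dockerfile above, assuming a requirements.txt listing fastapi, uvicorn, scikit-learn, joblib, and numpy:

```dockerfile
FROM python:3.9

WORKDIR /app

# Install dependencies first: this layer is rebuilt only
# when requirements.txt itself changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code last, since it changes most often
COPY . .

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

This ordering is a common Docker layer-caching pattern, not something the single-file version above requires.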

Build and Run the Docker Container

With the Dockerfile in place, you can now build and run the Docker container.

  1. Build the Docker image:
   docker build -t fastapi-docker-ml .
  2. Run the Docker container:
   docker run -d -p 8000:8000 fastapi-docker-ml

Test the API

Your FastAPI app is now running in a Docker container. You can test it by making POST requests to the /predict/ endpoint:

curl -X POST "http://localhost:8000/predict/" -H "accept: application/json" -H "Content-Type: application/json" -d '{"features": [5.1, 3.5, 1.4, 0.2]}'

This returns the predicted iris class (e.g., setosa) for the given input features.

Conclusion

You've successfully deployed a machine learning model using FastAPI and Docker, creating a RESTful API that can be accessed from anywhere. This approach allows you to easily scale your ML model deployment and integrate it into various applications and services. Explore further by enhancing your FastAPI app, adding authentication, and optimizing your Docker container for production use.
