From legacy to cloud serverless - Part 2


David WOGLO

Posted on September 4, 2024


Note: This article was originally published on Nov 12, 2023 here. It has been republished here to reach a broader audience.

Hey, how's it been since the last article? If you haven't had a chance to check out the previous installment in the series, I invite you to discover it here. Perhaps you've already tackled something similar to what was described there, and this article is a good resource for continuing your project. Welcome aboard!

In this article, we'll be transforming Docker Compose services into Kubernetes objects and deploying them in a Kubernetes environment.

To follow along, you'll need some knowledge of Kubernetes and to have completed the lab described in the previous article (or something similar). Also make sure you have a Kubernetes environment ready. As of now, I'm using DigitalOcean Kubernetes. I mention 'as of now' because if you've been here from the beginning, you're probably aware that our project's ultimate goal isn't just deploying on K8s. It's a journey of migrating a traditional app to a serverless cloud setup. The next step in this series will involve migrating to Google Cloud. Oh, did I forget to mention? I'm all about Google Cloud; I recently even snagged my Professional Cloud Architect certification. So expect Google Cloud to pop up regularly in my discussions, and the rest of this series will be purely GCP-focused.

Enough chatter, let's dive into the real stuff!

Build the application image and push it to the Docker registry

If you haven't done so already, I invite you to clone our project's repo here. Navigate to the docker folder, where all the Docker-related elements of the project are stored. Explore the content a bit, and once you're ready, come back, and let's continue. If you don't have a Docker Hub account yet, I recommend creating one.

Now, in your terminal, log in with docker login using your Docker Hub account information. After that, build the image, tagging it with your username and the image name.

docker build -t <username>/<image-name> .

Finally, push the image to Docker Hub.

docker push <username>/<image-name> 

Export MongoDB data

As part of our migration process, it's crucial to ensure we retain our data. To achieve this, let's export the data stored in the MongoDB container that we'll later use when deploying MongoDB on Kubernetes.

Export the existing MongoDB database from the Docker Compose setup:

  1. Access the MongoDB database container shell.

    docker exec -it <mongo_db_container> bash
    
  2. Export all data from the MongoDB database.

    mongodump --out /dump
    
  3. Exit the MongoDB database container shell.

    exit
    
  4. Copy the 'dump' folder from the MongoDB container to a specified destination.

    docker cp <mongo_db_container>:/dump <destination>
    

Install MongoDB on Kubernetes

Now, while connected to the Kubernetes cluster, let's install MongoDB using Helm:

helm install mongo-helm oci://registry-1.docker.io/bitnamicharts/mongodb --set auth.rootUser=root,auth.rootPassword="defineYourRootPassword"

This command leverages Helm, a Kubernetes package manager, to install MongoDB from the Bitnami chart hosted on Docker Hub's OCI registry.

  • The --set auth.rootUser=root,auth.rootPassword="defineYourRootPassword" part specifies the username and password for the MongoDB root user.

Make sure to save the output of this command; we'll be using it to construct the database connection URI.
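If you lose that output, you can usually recover the root password from the secret the chart creates. The secret name mongo-helm-mongodb and the key mongodb-root-password below follow the Bitnami chart's usual naming convention for a release called mongo-helm, so double-check them against your own release:

kubectl get secret mongo-helm-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d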

Verify that everything is installed correctly with the following commands:

kubectl get pods
kubectl get services

Restore data

It's time to restore the database:

  1. Open a shell in the MongoDB Kubernetes pod and authenticate as root, using the root password you set during the Helm install.

    kubectl exec -it --namespace default <mongodb_pod> -- /bin/bash
    mongosh
    use admin
    db.auth('root', '<root_password>')
    
  2. Create a non-root MongoDB user:

db.createUser({
  user: 'username',
  pwd: 'password',
  roles: [
    { role: 'readWriteAnyDatabase', db: 'admin' },
    { role: 'dbAdminAnyDatabase', db: 'admin' },
    { role: 'clusterAdmin', db: 'admin' }
  ]
})
exit
  3. Restore the Docker Compose database dump to the new MongoDB pod:

Back on your local machine (type exit again if you're still in the pod shell), copy the dump folder you exported from the Docker Compose setup into the MongoDB pod:

kubectl cp <destination>/dump <mongodb_pod>:/tmp/

Open a shell in the MongoDB pod again:

kubectl exec -it --namespace default <mongodb_pod> -- /bin/bash

Change to the dump directory and list its contents to verify the database folders are there:

cd /tmp/dump
ls

Restore the app database:

mongorestore --uri="mongodb://<username>:<password>@localhost:27017/?authSource=admin" -d app_db app_db

Here's how the connection URI is formed:

  • mongodb://: the prefix identifying that we're connecting to a MongoDB instance.

  • <username>:<password>@: the credentials for the connection. Replace <username> and <password> with those of the non-root user created earlier.

  • localhost:27017: the host and port where the MongoDB server is listening. Since mongorestore runs inside the MongoDB pod itself, we connect over localhost on MongoDB's default port, 27017. From anywhere else in the cluster (our application, for instance), you'd use the service's DNS name instead: mongo-helm-mongodb.default.svc.cluster.local.

  • ?authSource=admin: tells MongoDB to authenticate against the admin database, where the user was created.
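Putting those same pieces together with the in-cluster DNS name gives the connection string our application will use later. The release name mongo-helm, the default namespace, and the database name app_db all come from this walkthrough, so adjust them if yours differ:

mongodb://<username>:<password>@mongo-helm-mongodb.default.svc.cluster.local:27017/app_db?authSource=admin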

Exit the MongoDB pod shell:

exit

The new MongoDB is now ready for use.

Deploy and connect the application to the database

First, let's create the Kubernetes secret that will contain the connection string for the database. We're using the secret object because our connection string contains sensitive information. Kubernetes provides the secret object precisely for scenarios like this. If it were just configuration information or environment variables, a ConfigMap object would be more suitable.

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  mongo-uri: <base64-encoded-mongo-uri>

Create a YAML file and paste this content into it. Name the file as you see fit. Note that the mongo-uri field under data should contain the base64-encoded MongoDB URI. Replace the placeholder with the actual base64-encoded connection string.
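One way to produce that value is to pipe the connection string from the previous section through base64. The -n flag matters here; without it, echo appends a newline that would end up inside the encoded secret:

echo -n 'mongodb://<username>:<password>@mongo-helm-mongodb.default.svc.cluster.local:27017/app_db?authSource=admin' | base64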

apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-to-cloud-deployment
  labels:
    app: legacy-to-cloud
spec:
  replicas: 3
  selector:
    matchLabels:
      app: legacy-to-cloud
  template:
    metadata:
      labels:
        app: legacy-to-cloud
    spec:
      containers:
      - name: legacy-to-cloud
        image: docker_username/image_name:tag
        ports:
        - containerPort: 5000
        env:
        - name: MONGO_URI
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: mongo-uri

---
apiVersion: v1
kind: Service
metadata:
  name: legacy-to-cloud-service
spec:
  type: LoadBalancer
  selector:
    app: legacy-to-cloud
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000

Create a second YAML file for your application's manifest and paste this content into it. The env section of the container in the Deployment references the MongoDB URI from the secret we created earlier. Ensure that the secret name and key match the values used in the secret manifest. Also, ensure that the selector in the Service matches the one in the Deployment. This is crucial for linking the pods to the service.

If everything looks good, let's proceed with deploying our application. You can use the following command to validate the syntax of your YAML file and perform a dry run:

kubectl apply -f filename.yaml --dry-run=client --validate=true

This command checks the syntax of your YAML file and prints out the resources that would be created or modified without actually applying the changes. If there are any syntax errors, this command will highlight them.

If everything is okay, create the resources with the following command:

kubectl apply -f filename1.yaml -f filename2.yaml

Replace filename1.yaml and filename2.yaml with the actual names of your YAML files.

Get the external IP address of the service with:

kubectl get svc

Identify the service for your application and copy its external IP. Paste it into your browser to access the application.
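If you prefer a one-liner, and assuming you kept the service name legacy-to-cloud-service from the manifest above, you can extract the external IP directly (it can take a minute or two for the cloud provider to assign one):

kubectl get svc legacy-to-cloud-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'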

Well, that wraps up this section on the migration to Kubernetes.

A little gift for the road?

Haha, did you know there's a tool to speed things up? Here, we created YAML manifests by hand to deploy K8s resources. This deployment is a simple one, but imagine a massive one with hundreds of Docker Compose services, unimaginable complexities, and so on. Would we sit down and manually write manifests for all that? Of course not :) Enter Kompose. Kompose is a conversion tool from Docker Compose to container orchestrators like Kubernetes. It takes a Docker Compose file and translates it into Kubernetes resources.

Kompose is a handy tool for those familiar with Docker Compose but aiming to deploy their application on Kubernetes. It automates the creation of Kubernetes deployments, services, and other resources based on the services defined in the Docker Compose file.
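For instance, pointed at a Compose file, a single command generates a Kubernetes manifest for each Compose service (the exact set of files produced can vary between Kompose versions):

kompose convert -f docker-compose.yml

You can then review the generated YAML files, tweak them as needed, and apply them with kubectl apply -f, just as we did above.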

However, it's worth noting that not all Docker Compose features and options are supported by Kompose, so some manual tweaking of the generated Kubernetes resources might be necessary. Here's an excellent guide that addresses our use case well.

What next?

And that's a wrap for this article! In the next one, we're heading to the GOOGLE CLOUUUUUUUD :) and beginning to introduce DevOps tools and practices to automate and speed up our work. We're talking about stepping up the game. We'll be using Google Cloud DevOps tools—Cloud Build for CI/CD, Artifact Registry for container images, GKE for deployments. Plus, we'll dive into DevSecOps tools and practices, leveraging the security available within the Google Cloud ecosystem.

Thanks for reading, and see you soon in the next article in the series!
