DigitalOcean Kubernetes Challenge - MongoDB On Kubernetes

somsubhra1

Somsubhra Das

Posted on November 28, 2021


Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

What is DigitalOcean Kubernetes (DOKS)?

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets us deploy Kubernetes clusters hassle-free, without having to manage the control plane or the underlying container infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.

Why should you use Kubernetes?

Here are some of the benefits of using Kubernetes:

  • Auto Scaling
  • Self Healing
  • Load Balancing
  • Auto Rollbacks

Getting Started

To get started with DOKS, there are a few prerequisites:

  • DigitalOcean account: needed to create and manage the K8s cluster
  • doctl CLI: lets us quickly spin up a K8s cluster on DigitalOcean and automatically adds the config files to our system
  • kubectl CLI: lets us interact with our K8s cluster by running commands against it

How to create a K8s cluster?

On DigitalOcean, a K8s cluster can be created in several ways: through the DO dashboard, the doctl CLI, etc. Here we are going to use the DigitalOcean dashboard to create the cluster.

Go to the Kubernetes Create Cluster Dashboard.


  • Select the K8s version.
  • Choose your nearest datacenter region.
  • Choose the cluster capacity: number of nodes and machine type.
  • Then click Create Cluster.

The cluster will be created within a few minutes.
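Alternatively, the same cluster can be created from the terminal with doctl. This is only a sketch: the cluster name, region, node size and count below are example values, so substitute your own.

```
# authenticate doctl with your DO API token (one-time setup)
doctl auth init

# list the regions, node sizes and K8s versions available to you
doctl kubernetes options regions
doctl kubernetes options sizes
doctl kubernetes options versions

# create a small two-node cluster (example values)
doctl kubernetes cluster create my-doks-cluster --region blr1 --count 2 --size s-2vcpu-4gb
```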

Connecting to K8s Cluster

After the successful creation of the cluster, you will be greeted with a panel like this:

DO K8s panel

Scroll down to the bottom section and you will find a doctl command which automatically saves your K8s configuration to your local machine.

doctl kubernetes cluster kubeconfig save use_your_cluster_name

You have now saved the auth config for your K8s cluster to your local machine. Let's get started with the MongoDB deployment.
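To confirm that kubectl is now talking to the new cluster, you can list its nodes:

```
kubectl get nodes
```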

How to deploy MongoDB on DOKS?

Now we are ready to use kubectl CLI and run commands against our K8s cluster to deploy a MongoDB instance.

TL;DR

Clone the following GitHub Repository

git clone https://github.com/Somsubhra1/Digitalocean-Kubernetes-Challenge.git
cd Digitalocean-Kubernetes-Challenge

Run the following command to set everything up in one go:

kubectl apply -f .

Then you can skip ahead to the Running MongoDB from Shell section.

Creating MongoDB Secrets

Secrets in Kubernetes are the objects used for supplying sensitive information to containers.

To secure our MongoDB instance, we should always restrict access to the database with a password. Here we will use a Secret to supply the desired credentials to the containers.

Create the following file and name it mongodb-secrets.yaml.

apiVersion: v1
data:
  password: ZG9rOHNtb25nbw== # dok8smongo
  username: YWRtaW4= # admin
kind: Secret
metadata:
  creationTimestamp: null
  name: mongo-creds

P.S.: The above username and password are base64-encoded. Make sure you encode them without a trailing newline; otherwise the newline silently becomes part of the credential.
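If you want to substitute your own credentials, the values can be generated locally. Note that a plain echo appends a trailing newline, which would end up inside the credential; printf '%s' (or echo -n) avoids that:

```shell
# encode credentials for the Secret (no trailing newline)
printf '%s' 'admin' | base64          # YWRtaW4=
printf '%s' 'dok8smongo' | base64     # ZG9rOHNtb25nbw==

# decode to double-check
printf '%s' 'YWRtaW4=' | base64 -d    # admin
```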

Now run the following command to apply the changes to our K8s cluster.

kubectl apply -f mongodb-secrets.yaml

Creating MongoDB Persistent Volume

We need volumes in K8s to store the data, so that it is not lost when our pods restart or the cluster goes down.

In K8s, two objects are involved in creating volumes.

  • Persistent Volume Claim (PVC): A request for storage. Kubernetes looks for a Persistent Volume from which space can be claimed and assigned to the PVC; if the cluster has dynamic volume provisioning enabled, a matching PV is provisioned automatically.

  • Persistent Volume (PV): A piece of storage in the cluster, provisioned by an administrator or dynamically via a storage class.
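To check whether your cluster supports dynamic provisioning, list its storage classes; on DOKS you should see the do-block-storage class marked as the default:

```
kubectl get storageclass
```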

Create the following file and name it mongodb-pvc.yaml.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes:
    - ReadWriteOnce 
  resources:
    requests:
      storage: 1Gi

Run the following command to create the PVC.

kubectl create -f mongodb-pvc.yaml

P.S.: If your cluster doesn't support dynamic provisioning, create a PV manually with the following steps.

Create mongodb-pv.yaml file and insert the following into it.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/mongo

Then run the following command against your K8s.

kubectl create -f mongodb-pv.yaml

Deploying MongoDB image

We are going to use the official Mongo image from Docker Hub.

Insert the following into a file and name it mongodb-deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        args: ["--dbpath","/data/db"]
        livenessProbe:
          exec:
            command:
              - mongo
              - --disableImplicitSessions
              - --eval
              - "db.adminCommand('ping')"
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
              - mongo
              - --disableImplicitSessions
              - --eval
              - "db.adminCommand('ping')"
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 6
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-creds
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-creds
              key: password
        volumeMounts:
        - name: "mongo-data-dir"
          mountPath: "/data/db"
      volumes:
      - name: "mongo-data-dir"
        persistentVolumeClaim:
          claimName: "mongo-data"

Then run the following command against your K8s.

kubectl create -f mongodb-deployment.yaml

Running MongoDB from Shell

Now that we have successfully deployed the MongoDB instance on our cluster, let's access the database through a shell and run commands against it.

Save the file below as mongodb-client.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mongo-client
  name: mongo-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-client
  template:
    metadata:
      labels:
        app: mongo-client
    spec:
      containers:
      - image: mongo
        name: mongo-client
        env:
        # dummy values; this pod is only used as an interactive shell client,
        # not as a database server
        - name: mongo-client_INITDB_ROOT_USERNAME
          value: 'dummy'
        - name: mongo-client_INITDB_ROOT_PASSWORD
          value: 'dummy'

Now run the following command to deploy the client.

kubectl create -f mongodb-client.yaml

Now let's exec into the client pod.

kubectl exec deployment.apps/mongo-client -it -- /bin/bash

Now, inside the pod's shell, let's connect to the MongoDB instance using the username and password created above. (mongo-nodeport-svc is the service we create in the next section; if you used the TL;DR kubectl apply -f ., it already exists.)

mongo --host mongo-nodeport-svc --port 27017 -u admin -p dok8smongo

And finally execute the command in the database to verify that MongoDB has been successfully deployed.

show dbs

If it returns the list of databases, we can be sure that the deployed instance is running successfully.
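As a further smoke test you can write and read back a throwaway document from inside the same mongo shell (the database and collection names here are arbitrary):

```
use testdb
db.ping.insertOne({ ok: true })
db.ping.find()
show dbs
```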

Connecting to MongoDB from external Apps

Now that our database instance is running and we can run commands inside it, let's move forward and see how to connect a backend app (built with Node.js, Python, etc.) so it can use our database.

First we need to create a Service (svc) for our K8s cluster. Save the following into a file and name it mongodb-nodeport-svc.yaml.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-nodeport-svc
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 27017
    nodePort: 32000
  selector:
    app: mongo
  type: NodePort

Now create the svc using the command:

kubectl create -f mongodb-nodeport-svc.yaml


We have successfully created the service, which is exposed externally on port 32000 of every node. We now have the username, password and port of our database. But hold on: we still need the IP/host of a cluster node so that we can connect to it.

To find out the IP of a node, run the following command (you can list your droplet IDs with doctl compute droplet list).

doctl compute droplet get <cluster-node-id>

You should see a public IP in the output. Use that IP to connect to MongoDB on your K8s cluster from external apps.
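If you prefer staying inside kubectl, the nodes' public IPs are also visible in the EXTERNAL-IP column of:

```
kubectl get nodes -o wide
```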

To connect an external app, your DB URI should look like this:

mongodb://<username>:<password>@<Public_IP>:<port>/?authSource=admin&readPreference=primary&appname=MongoDB%20Compass&directConnection=true&ssl=false
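When building this URI programmatically, remember to percent-escape the username and password in case they contain characters like ':', '@' or '/'. A minimal sketch in Python (the host IP below is a documentation placeholder; 32000 is the NodePort from the service above):

```python
from urllib.parse import quote_plus

def mongo_uri(username: str, password: str, host: str, port: int = 32000) -> str:
    # credentials must be percent-escaped per the MongoDB connection string rules
    return (
        f"mongodb://{quote_plus(username)}:{quote_plus(password)}"
        f"@{host}:{port}/?authSource=admin&directConnection=true&ssl=false"
    )

# 203.0.113.10 is a placeholder, not a real droplet IP
print(mongo_uri("admin", "dok8smongo", "203.0.113.10"))
# mongodb://admin:dok8smongo@203.0.113.10:32000/?authSource=admin&directConnection=true&ssl=false
```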

Congratulations, you have successfully set up MongoDB on a K8s cluster using DOKS.

Credits: How To Deploy MongoDB on Kubernetes – Beginners Guide
