Deploying a Microservice on Azure Kubernetes (with Let's Encrypt)
Ian Knighton
Posted on February 28, 2019
(I also wrote this on my blog. It would be cool if you checked it out. Even though there's nothing there right now.)
I recently had to struggle through this at work and I wanted to share my documentation in case it can help someone in the future. This is a very, very dry post, but it should cover most of the basics.
The Problem
One of our developers just created a new microservice in golang, and we needed to deploy it into a Kubernetes cluster in order to take advantage of the scaling capabilities. The service needs to be accessible through the web and protected by an SSL cert. At this point, the application can be run using docker-compose and is completely ready to be deployed.
The Solution
Anywhere you see <something>, it's a placeholder for a value you'll need to supply. Keep track of these throughout the process, as many are used repeatedly.
Dependencies
The following tools will need to be installed and functional on your machine (each one is used at some point below):
- Azure CLI (az)
- kubectl
- Helm
- Docker
- docker-compose
- kompose
Process:
Create a Resource Group
To create a resource group:
az group create --name <resourceGroup> --location <location>
Create a Cluster
az aks create \
--resource-group <resourceGroup> \
--name <clusterName> \
--node-count 1 \
--enable-addons monitoring \
--generate-ssh-keys
This command will take a few minutes (sometimes ~10) to run, and at the end it will return JSON-formatted output with the settings for the cluster. Copy and save this information.
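If you lose track of that output, the same settings can be pulled back up at any time:
az aks show --resource-group <resourceGroup> --name <clusterName> --output json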
Connect to the Cluster
az aks get-credentials --resource-group <resourceGroup> --name <clusterName>
Once this is complete, verify your connection.
kubectl get nodes
That should come back with output listing the nodes, each with a status of Ready.
NAME STATUS ROLES AGE VERSION
aks-nodepool1-8675309-0 Ready agent 2d v1.9.11
Initialize Helm/Tiller
Create a Service Account
Create a file called helm-rbac.yaml in your working directory with the following YAML.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Create the account with kubectl.
kubectl apply -f helm-rbac.yaml
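If you want a quick sanity check, the new service account should show up in the kube-system namespace:
kubectl get serviceaccount tiller --namespace kube-system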
Configure Helm
Initialize Tiller on the cluster so Helm can connect to it.
helm init --service-account tiller
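Give Tiller's pod a minute to start. Once it's up, helm version should report both a client and a server version; if the server half times out, Tiller isn't ready yet.
helm version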
Create an Ingress Controller
helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=2
This process will create a public IP address for the cluster; we'll need that going forward.
To find the public IP address:
kubectl get service -l app=nginx-ingress --namespace kube-system
From the output of that command, you will need the EXTERNAL-IP of the LoadBalancer service.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
invincible-toucan-nginx-ingress-controller LoadBalancer 10.0.186.72 192.167.15.243 80:30947/TCP,443:32654/TCP 2d
invincible-toucan-nginx-ingress-default-backend ClusterIP 10.0.173.78 <none> 80/TCP
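If EXTERNAL-IP shows <pending>, Azure is still provisioning the load balancer; you can watch until the address is assigned:
kubectl get service -l app=nginx-ingress --namespace kube-system --watch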
Install Cert-Manager
(Quick edit: It appears that there may be an issue with Cert-Manager version 0.6. This documentation was written against version 0.5.2, so I updated this command to specify the version.)
helm install stable/cert-manager \
--version 0.5.2 \
--namespace kube-system \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer
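Before moving on, it's worth checking that the cert-manager pod is running (the exact pod name will include the generated Helm release name):
kubectl get pods --namespace kube-system | grep cert-manager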
Create a Cluster Issuer
Create a file in your working directory called cluster-issuer.yaml.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <emailAddress>
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
Use kubectl to create the cluster issuer.
kubectl apply -f cluster-issuer.yaml
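You can confirm the issuer registered with Let's Encrypt by describing it and looking for a ready condition near the bottom of the output:
kubectl describe clusterissuer letsencrypt-prod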
Create a Certificate Object
Create a file in your working directory called certificate.yaml and add the following information.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: tls-secret
spec:
  secretName: tls-secret
  dnsNames:
  - <url>
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - <url>
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
Use kubectl to apply this certificate.
kubectl apply -f certificate.yaml
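Issuance isn't instant; the Events section at the bottom of the describe output is the best place to follow its progress:
kubectl describe certificate tls-secret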
Create Container Registry
az acr create --resource-group <resourceGroup> --name <registryName> --sku Basic
Verify you are able to log in to the registry. The credentials can be found on the "Access Keys" blade in the Azure Portal.
az acr login --name <registryName>
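For reference, the registry's full login server name is <registryName>.azurecr.io, which is the prefix used when tagging images below. You can confirm it with:
az acr show --name <registryName> --query loginServer --output tsv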
Grant Access from Cluster to Registry
Create a file called grant-access.sh in your working directory with the following information:
#!/bin/bash
AKS_RESOURCE_GROUP=<resourceGroup>
AKS_CLUSTER_NAME=<clusterName>
ACR_RESOURCE_GROUP=<resourceGroup>
ACR_NAME=<registryName>
# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)
# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
Save and run the script; the role assignment should now allow the cluster to pull images from the registry.
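To verify the assignment took, you can list the roles granted to the service principal (reusing the $CLIENT_ID and $ACR_ID values from the script):
az role assignment list --assignee $CLIENT_ID --scope $ACR_ID --output table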
Build and Push Docker Image to Registry
Change directories to the repo and build the container.
docker build -t <imageName>:<imageTag> .
Once the build has been validated, tag the image and push it to the registry.
docker tag <imageName>:<imageTag> <registryName>.azurecr.io/<imageName>:<imageTag>
docker push <registryName>.azurecr.io/<imageName>:<imageTag>
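To double-check that the push landed, list the repositories in the registry:
az acr repository list --name <registryName> --output table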
Deploy Images to Kubernetes Cluster
The docker-compose.yml file already exists in the repo, so it only needs to be validated.
docker-compose up
Assuming everything worked, convert to Kubernetes deployments/services using kompose.
kompose -f docker-compose.yml up
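If you'd rather review the manifests before anything hits the cluster, kompose convert writes them out as YAML files instead of applying them directly; you can then kubectl apply them yourself.
kompose convert -f docker-compose.yml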
Verify deployments and pods were created and are running.
kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
<service> 1 1 1 1 4h
kubectl get pods
NAME READY STATUS RESTARTS AGE
<service>-85b87ddc6-bfm7j 1/1 Running 0 4h
Create an Ingress Route
Create a file called ingress-route.yaml and add the following:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - <url>
    secretName: tls-secret
  rules:
  - host: <url>
    http:
      paths:
      - path: /
        backend:
          serviceName: <service-name>
          servicePort: 3000
Apply the change with kubectl.
kubectl apply -f ingress-route.yaml
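You can confirm the route is wired up; the ADDRESS column should eventually show the ingress controller's public IP:
kubectl get ingress ingress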
Testing
Validate everything has worked by navigating to your URL in a browser. In my experience thus far, it can take around 20 minutes for the changes to propagate out through the internet, which may cause some weird behaviors.
One indicator is if you see a pod running (kubectl get pods) for cm-acme-http-solver that doesn't normally show up. That means cert-manager is still working on obtaining a certificate from Let's Encrypt.
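Once that solver pod disappears, you can verify the certificate was stored as a secret and that HTTPS answers:
kubectl get secret tls-secret
curl -v https://<url>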
Resources:
- Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI
- Install applications with Helm in Azure Kubernetes Service (AKS)
- Create an HTTPS ingress controller on Azure Kubernetes Service (AKS)
- Translate a Docker Compose File to Kubernetes Resources
- Custom domain and Azure Kubernetes with ingress controller AKS
- Push your first image to a private Docker container registry using the Docker CLI