From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD — Part 2
Valon Januzaj
Posted on February 3, 2023
This is the second part of the series: From local development to Kubernetes — Cluster, Helm, HTTPS, CI/CD, GitOps, Kustomize, ArgoCD. You can find Part 1 here.
This part includes:
Introduction to GitOps — ArgoCD installation
Using Kustomize to write Kubernetes manifests
Secret management in Kubernetes with SealedSecrets
Adding a basic continuous integration pipeline with GitHub Actions
Creating and running our services in ArgoCD
Intro to GitOps — Continuously integrating and deploying applications with ArgoCD, Kustomize, and GitHub Actions
If we go to the GitHub repo you can find all the manifests that we worked on so far: Deployment, Service, ClusterIssuer, Ingress, and Secret. This is okay up to a point… but look at that secret.yaml file, which really shouldn't be there: its values are only base64-encoded, and base64 is trivial to decode. Aside from that, it's hard to separate environments: production, staging, development, etc. There can also be lots of inconsistencies: I can have some manifest locally and apply it while another version of that manifest exists in the repo, so it's very hard to reproduce the same environment if we delete everything.
Ideally, we would love to have a reproducible environment where everyone who works on infra knows the state of an application and can easily make changes to the manifests while everyone sees the changes, so let's aim for this!
Intro to GitOps — What is GitOps
GitOps is a way to manage infrastructure and applications using Git as a single source of truth. The idea is to store the desired state of the infrastructure and applications in Git, and then use automation tools to ensure that the actual state matches the desired state. This approach helps to ensure that the infrastructure and applications are always in a known, good state, and it makes it easy to roll back changes if something goes wrong.
GitOps solves several problems in software development and deployment, including:
Version control: By using Git as the central source of truth, GitOps allows teams to easily track changes and roll back to previous versions if necessary.
Collaboration: GitOps allows multiple people to work on the same codebase and infrastructure, making it easier to collaborate and share knowledge.
Automation: GitOps uses automation tools to ensure that the desired state of the infrastructure and applications is always in sync with the actual state, reducing human error and increasing efficiency.
Auditability: With GitOps, every change is tracked and auditable, making it easier to understand how and why changes were made.
Speed: GitOps allows for faster deployment and rollback, as well as faster iteration and experimentation, as teams can quickly and easily test new features and changes.
Scalability: GitOps allows teams to scale their infrastructure and applications easily and efficiently, with the ability to easily add and remove resources as needed.
GitOps is a methodology, not a specific tool: a way of organizing and managing the deployment process with Git as the single source of truth for infrastructure and application deployments. ArgoCD is a tool that implements the GitOps methodology by automating the deployment and management of applications and infrastructure in a Git-based workflow.
What is ArgoCD — Installing ArgoCD in the cluster — Implementing GitOps methodology
ArgoCD is an open-source GitOps tool that automates the deployment of applications to Kubernetes clusters. It uses Git as the source of truth and continuously monitors the state of the cluster to ensure that it is in sync with the Git repository. ArgoCD also provides a web-based UI that makes it easy to view and manage deployments.
To install ArgoCD we can use the same methodology as before: installing from a Helm chart or directly applying the manifests that ArgoCD has published. We'll apply the published manifests:
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
After a couple of minutes, ArgoCD should be installed and we can port-forward the service and access the ArgoCD UI locally, so let’s do that:
$ kubectl port-forward --namespace argocd svc/argocd-server 3000:443
# Get password - use this password when logging in from the UI
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Now visit localhost:3000 and log in:
username: admin
password: the password that got output when you executed the command above
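Alternatively, the argocd CLI (installed separately) can log in through the same port-forward; a minimal sketch, assuming the forward from above is still running and using the password retrieved earlier:
$ argocd login localhost:3000 --username admin --password <password-from-above> --insecure
The --insecure flag is needed here because the API server behind the port-forward presents a self-signed certificate.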
Normally when working with GitOps, there is a well-known pattern where you keep all the manifests and infrastructure-related resources declaratively in a separate Git repository. For this series I am going to create a new repository named kubernetes-demo-gitops.
➜ ~ gh repo create kubernetes-demo-gitops --private
✓ Created repository vjanz/kubernetes-demo-gitops on GitHub
I will create an Argo AppProject so we can separate the applications and not leave them in the default project. Think of an AppProject as a namespace in Kubernetes, just a layer to isolate the resources.
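A minimal AppProject manifest could look like the sketch below; the project name, namespace, and repo URL are assumptions based on this setup:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: fastapi-app
  namespace: argocd
spec:
  description: Project for the FastAPI demo services
  # which Git repositories applications in this project may pull manifests from
  sourceRepos:
    - git@github.com:vjanz/kubernetes-demo-gitops.git
  # which clusters/namespaces applications in this project may deploy into
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "*"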
I will push this manifest to my new repo as projects/fastapi-app.yaml so the repo is not empty. Now my GitOps repo will look like this:
.
└── projects
└── fastapi-app.yaml
Now let's connect the GitHub repo that we just created to ArgoCD. In the ArgoCD UI navigate to Settings > Repositories > Connect Repo
We need to generate an SSH key, add the public key to the repository, and add the private key in ArgoCD:
➜ .ssh ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pc/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pc/.ssh/id_rsa
Your public key has been saved in /home/pc/.ssh/id_rsa.pub
Now we need to do two things:
Add the public key that we generated to GitHub deploy keys
Add the private key on ArgoCD
Copy the public key (the file ending in .pub) and add it as a deploy key in the repository on GitHub.
Then navigate to ArgoCD and paste the private key from the same key pair that you generated:
cat ~/.ssh/id_rsa
Now the repo is connected and we can continue to write the manifests. Argo CD supports several ways of defining manifests, including:
Kubernetes manifests in YAML or JSON format
Helm charts
Kustomize bases and overlays (we are using this one)
Jsonnet templates
Custom config management plugins
Install Kustomize from here
What is Kustomize — Converting manifests the Kustomize way — Using SealedSecrets to manage secrets
Kustomize is a tool used to customize Kubernetes manifests, which are files that define the desired state of a Kubernetes cluster. It allows you to modify and extend existing manifests, or create new ones, without having to write everything from scratch.
For example, you may have a base manifest that defines the deployment of a certain application, and you want to use that same manifest in multiple environments, but with some slight variations. With Kustomize, you can create a separate “overlay” for each environment, that specifies the specific changes you want to make to the base manifest, and then apply those overlays to the base manifest to generate a final, customized version that you can use to deploy the application.
Now we’re going to take the plain manifests that we wrote at the beginning: deployment, ingress, service, etc. and convert them into the format of Kustomize.
In our GitOps repo create a new directory named apps. This will be the directory where we list all the apps that we want to manage with ArgoCD. Remember, according to GitOps your Git repo can (and should) manage all the projects and infrastructure for your organization. Copy the deployment and the service that we created in previous parts to apps/fastapi-service/base without making any modifications for now. The repository structure should look like this:
├── apps
│ └── fastapi-service
│ ├── base
│ │ ├── deployment.yaml
│ │ ├── kustomization.yaml
│ │ └── service.yaml
│ └── overlays
│ ├── development
│ └── production
├── argocd
└── projects
└── fastapi-app.yaml
In base I add resources that will be part of every environment, while in overlays > development and production I add things that are specific to those environments. Simply said: put everything that is common to all environments in base, and anything environment-specific in overlays/.
Let's build the base first. In the base directory, I am going to paste the deployment and service as they are, and then we'll make some modifications. You can also see that I've included a new file named kustomization.yaml where I define which resources I want to include:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
Normally, when you finish a new feature and push it to the GitHub repository, we want to deploy that change. What happens is we build a new Docker image and push it to the registry. After that, we update the Kubernetes deployment to use the new image. So the image has to change dynamically: for example, vjanz/kubernetes-demo:v1 can be one version and vjanz/kubernetes-demo:v2-my-feature another.
This is where the beauty of Kustomize comes in, as we can put some placeholders in our manifest and then update them as we want. Edit base/deployment.yaml and update:
containers:
- name: kubernetes-demo
image: valonjanuzaj/kubernetes-demo:latest
to:
containers:
- name: kubernetes-demo
image: my-image
I updated the image name to my-image, which doesn't make much sense on its own because it isn't even a valid image name, but now we can use Kustomize to replace my-image with something else in an indirect way.
# change directory to base
$ cd apps/fastapi-service/base
$ kustomize edit set image my-image=valonjanuzaj/kubernetes-demo:v2
This command won’t change deployment.yaml directly, but it will update the kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
# It will add the following part
images:
- name: my-image
newName: valonjanuzaj/kubernetes-demo
newTag: v2
Next time we use Kustomize to build a specific environment, the tool will look at the kustomization.yaml file and update the values accordingly. Let's check it by building the base manifests with Kustomize:
# From the repository root
$ kustomize build apps/fastapi-service/base
If you look carefully at the output, you can see that the image is not my-image but whatever we set before:
....
- envFrom:
- secretRef:
name: demo-secrets
image: valonjanuzaj/kubernetes-demo:v2 # Updated by Kustomize
name: kubernetes-demo
ports:
- containerPort: 8000
Awesome, isn't it? Now let's imagine a scenario where the CI (Continuous Integration) pipeline builds a new Docker image and tags it with some hash, for example valonjanuzaj/reponame:0447995; then we can easily update the manifests with Kustomize to use the newly generated image (we'll do exactly this when we implement the CI/CD pipeline shortly).
Now remove the images key from the kustomization.yaml in the base directory, because we will start to create our development environment in our GitOps repo. In overlays/development we create a new file named kustomization.yaml which holds a reference to the base directory, plus any additional resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubernetes-demo-dev # we put the namespace here
resources:
# references everything in the base directory, as we want to include it all
- ../../base
Now we have imported the deployment and service, which are common to all environments. We also added a namespace to separate resources, and next we will add resources that only make sense for a specific environment. Let's start by adding an ingress for development.
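The ingress from Part 1 can be reused almost as-is; here is a minimal sketch for overlays/development/ingress.yaml, where the hostname, the service name, and the ClusterIssuer name are assumptions based on this setup:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-demo
  annotations:
    # assumes the cert-manager ClusterIssuer created in Part 1
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - dev.example.com # assumed development subdomain
      secretName: kubernetes-demo-tls
  rules:
    - host: dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-demo # assumed service name from Part 1
                port:
                  number: 8000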
Now we need to create an A record for this subdomain, but I will not go through how to do it as it's explained in Part 1. Let's add the ingress to the kustomization.yaml in the development directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubernetes-demo-dev
resources:
- ../../base
# added ingress to the kustomization on /overlays/development as
# I want to use this ingress only for development
- ingress.yaml
In the first part, when we deployed the application the traditional way, we saw that secret management becomes hard: it is easy to decode a base64 value, so it isn't safe at all to push the secrets to the repository. In the GitOps methodology the secrets should also be part of the repository, as we want a system that is easily reproducible. So let's find a way to make the secrets safe even when they're in the repository.
Secrets management in GitOps — SealedSecrets
SealedSecrets is a Kubernetes-native solution for managing secrets using GitOps. It allows you to encrypt sensitive information like passwords, API keys, and certificates and store them in your Git repository while keeping them securely encrypted at rest and in transit.
To start working with SealedSecrets we need two things:
Install SealedSecrets in the cluster
Install kubeseal locally (CLI tool) to encrypt the secrets
To install SealedSecrets in the cluster we can again use a Helm chart.
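If you don't already have a repo that serves the chart, the upstream chart repository can be added first (a sketch; the repo alias is an assumption):
$ helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
$ helm repo update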
➜ ~ helm search repo sealed
NAME CHART VERSION APP VERSION DESCRIPTION
my-repo/sealed-secrets 1.2.1 0.19.3 Sealed Secrets are "one-way" encrypted K8s Secr...
sealed-secrets/sealed-secrets 2.7.3 v0.19.4 Helm chart for the sealed-secrets controller.
# Installation
$ helm install sealed-secrets my-repo/sealed-secrets --namespace kube-system
The command installs a controller in the cluster in the kube-system namespace, and the controller generates a key pair (exposed as a certificate) that is used to encrypt the secrets. This is great because even though we commit the secrets to the repo, they are encrypted against a key that exists only in our cluster, so they cannot be decrypted anywhere else.
To install the kubeseal tool locally, look for the instructions here. After you install kubeseal, we can easily create secrets that are safe to push to the repository. Let's grab the secret for Postgres that we created earlier and convert it to a SealedSecret.
Our old secret.yaml looks like this:
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: demo-secrets
type: Opaque
data:
POSTGRES_USER: cG9zdGdyZXM=
POSTGRES_PASSWORD: TDVYT3lacTViUg==
POSTGRES_PORT: NTQzMg==
POSTGRES_DB: a3ViZXJuZXRlcy1kZW1v
POSTGRES_SERVER: cG9zdGdyZXMtcG9zdGdyZXNxbC5wb3N0Z3Jlcw==
Now let's create a version of this with the kubeseal tool, which uses the certificate that exists in our cluster to encrypt the data:
# --controller-name: name of the sealed-secrets controller
# --controller-namespace: namespace where the controller runs
# --scope cluster-wide: allow the secret to be unsealed in any namespace
kubeseal \
  --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --scope cluster-wide \
  --format yaml < secret.yaml > sealed-secret.yaml
Now the generated sealed-secret.yaml can be added to the GitOps repo as it’s encrypted using the certificate that is inside the cluster, and the file would look something like this in my case:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
annotations:
sealedsecrets.bitnami.com/cluster-wide: "true"
creationTimestamp: null
name: demo-secrets
spec:
encryptedData:
POSTGRES_DB: AgBLjgBM7CIlaEnxTWVedkmhd5HgO+Ep9HUNfwGNLe4K7tFS540xoVvvwV9g7UZJM547dcn3F5thfKal4ilah4UixQ1Y5w9ZG38jf4zo1AwiXaV+1YdvXjC7NLRAQhh3Ya8bwJT7QOJRS0vGioJRWkB9BY5JUHlgTJHvVcuMdwoD0vR34M3Z5XswrMr+uBBLyasrDSKtrhhIOxGTsMHtYWzfWm2UiJRp/s6hnsZG4N5IAFDB8HYcMCWGkTtZ3DGsX6XD30JrK6txpGmRb4PjYIFiKtFgp3uKWHS4XN2rgiK0VdpvgdZbgLVclX24NK+o/P+75cwHyVP6aGY9DIWFmovj3afaEfVYcnO91EC0l0V716HE6q3lKB204hpsZ3ioTPV+9MSzW6YixafX2t+J2wQiUd8q996v5lNWomRPdyjw0P2lXzJUbjkQjHeMK2UBu7xLz/ODb7QDhQpOnKGm/Wz1Tj/brd6vpDWVdntY8+9KDW1n3e6E4po8P2P09ihP4OtkFbD2jKC/56FUvV5y3wlP5XJxn9jqoZJBcq4PGS3cpngjUSimOfc9WpvG3wLhpXSFLlVXrWxoPeklBcX++sy4blFQ34JHXiD+48qk5GY7ceW45PcJOQv6REUUo2OCdWt9KUqrWiC6WgM1STJ7sScWNHL0ito9N3eNm+T8nwaI/jNwSLNCQNHkIUId8t86w9NqclSUsda1xQuGUqj/JU8=
POSTGRES_PASSWORD: AgCBdPMRPxax/teLQw9fzLiYnXcDOZY+6Ly+eqem2qIePzg65guoTkqnMaQnCi3veUIi4RlimiMBoLxhMe6sHBX2LrEESfbtf2nRtZrs69QNG1lvOKMylXNpNHYISEKh6D3GWApcdYM/phXr0QbMZY6+CP0dMAn3tPTXBj1HZ3MJgwZYMKnKdAQbY49FprHwO0N28te96IvqagdlEIWKkYXBtazHG7lAIJDKfleHDyWa1FLjbtrjb+oXbx3eBd/scKagYdZc/I7EkelbMKNuzGgMRnKjaN/fez1dvwnzPWqRKgiAMQP05jfO15bjOWGqlwU2UFd1RQuB1gzrJViDo3tWI7vYXpegIWbBPes1jCC3y5hybxprGoWMkiMRXmj+anVbLRl1ZH+SRcZldCUOTzhUFI9J/vc2rb6kOj+aetR0eJKrhZ6/SkR5Sa9kHzakUDROmdC9cIzSVfZ3RA1aBSs56JCX7gLvDndPGpT/BFfMMt41DiA6O7TtM3CEM/qB+YDs9XFJVPsDlHkdMziv0bAR5jRNQTa5xCTSMt6VU/ef0+415pv1iJqau85TK5hSptSq/3Fn6ARhTtcw1RpvY3USd8PDVHMbQkdLW5SnEAFp37WUrjjqi7VcrGcVNGwQZbAzzyg4ns3EQ3p3TU4uXzbcTHeLHfzA/NKDRAzqMV1d3PWPlMuDfVqfQaXBPv8LKeFMWNt75URrUov4
POSTGRES_PORT: AgCLN36HvCVsCc+7MT8IwP9bpcTJcoqh80USdqsUmhwEFYhzAo8Kux3/gwxnghDDvEya9WCQSAEbuAD6hX4Yo7T+sbSD5+oxDRZAU+YFPdYjAJs0tOhMAM2AwAmj1cJLoGFRUqqCFI2uFRExB1nJkr1e0QOgingnnLWPPvUIP0v/Gj3Bh/+FC925LppZcjJxuJ9xyRYuj1bqLoqAmw9YtXPYOArlAYn1t1+xDseSxvAYo7UU1+QCx82zBZVyXnEpyPaGjKsqIE9O4MaV4g62W7VbBNtRbK7lCinggjFzQLv/T8s0IVgmqGMtou4oPamtlZN8OThUZF2W5B+PBBBHsKXIiOAoWVCF27x3mEC7OLlpRwwVpic4y9nDHkLLg2V0Wpcnu8m41voyjQywT8fDP2ogl3sDHeUpouG2UduumWz4PZpDyNBriJ9cZUwa+de00mLftA170scDBqqw3hTkmvnbwoy/+L6mYjJn1/yl1lUUMd4ezYm1Ki7dwRzrXfvy/zIpHHQt2T1Av2JHpCEhlW8DoBPAP5C0nobUUSRxNqvBgeN81GRORohsfKjs76wwSJyyOXuU4Y9eLD0JDZuJ9aei6T/jAd8nubcef7jw2pwIuAQ5RrCGot8mHWQwXjd52/XWqyVUzcksqnMcsuK7u93/SKcKZb9tr2wMzLw75PjYRXYrTblyH5DAmOAhNVjLzculWGDZ
POSTGRES_SERVER: AgBTTiwCjdNsETS/9aoJzvSVtPsUWfY5ZSnHmsDxQxgaPR1TWbZx1iNuZjOIw0XZm6T3OBnGoVKq09kQdMS0DOvOtZ+XNoP7S+88Ee82lyymMZyCBDlMcAUyHxR6Xa/RpqE1IFtZB5m/aubN23A9vevZkH73cwWTwl/CTVmsb9x0dY6W3NExOG5FQ7HaOsTrnyirDZSyLRYGnYNCeqzY1OFPiPQLcyYJoFwDfATQ7e7x0O3S6vhnj/KeUCxunsMpSsIavjdo/t8DgFtkUhNaWfCr3LWB4WdL2uIeCs8gebyzO7xaxR+/XKCHSrH9WeLHkQknwwfVdWFidGMtUXLvUvOM26EsrmIcAnioD6rxpRtIszWuDYNAl7qdk+s4WsXJFWuiUzALNioWuwUGDlICb6ViWGdlbTXI2W8PYQFHiuCTByGk93hc46T+jdsiM+gxzik5FdhFMAnsqZLzkvfqJfeBT5Sr/+AGfjke/SH5ses/KB+61NtCRiBwaL10S73KwKmzk6wC/zBv1sEICWJhf08Z+VU2q76HcJJXu9Ll66uvo/YWViNPR1W7Rt881QGzof1/MEf3Rc1xy0Ni65Z87mQEMs68wzjLb2eLpPk5x3AAPgjGVQw1CVgnutoOlwZevwayCP/5kNIE5Bzhm4pgx41sjeBItqZxvkXqgNpBIfcxKPs6rECgLRws1tRNId0xLhkkvfma5ckpC0M4UlagplpnfuriNzOnZv3MZ+q2
POSTGRES_USER: AgCTJA1LSrYSdLn/3IrGyxjbI3LXIu5sN+Swt84RsLDgOvqAuqJ9aL8YHotWwumUBpxxdm3MdCUoTcWqnTgUSst9hRvgrO72YQr9ej2YBe5CBbPXbO6Y7PQNbm0rz73AKo7HAI5GOP77Kd/o2ovos9f1dWLayI7+6HDvl4FjBCHTQ8B+e2kJnBHSB8/P6PdGAOks5qMBK8hCMu9gUjpxygGgZDAiW+ITInbzKABh+6AbMgXSl7WRGZVgwGZ1Mlezh13rDOmMhy/68j2s5HaOlgnvGmJYqyS3k5NegO7nhTwvlqzQCI/zuca8e+PpoedIa6XG3c6Y91psrIwZn2e7KT2z5sXUSRq/IX+PnH+Qx+SJIJP/0UL2cuVal+DXyr1jA2lZSizFBW5ZpLteDVRMFcjwVj0gaz5EUkpeGJPRJa/yQVi/KnB/KHRx8VxxE6k2fDY3NKb9hsx6E0Mjwsc9fdHC5W18pbwc/QGiN7bO9WSCusoibletolLS9eS7YBEORG+4LiYyhzI9KAzp3a9FZu15R4CZW6QNvxjo7sOAMkH71U8JNpz1h77bpKelEgPpVPtS9WCFQ8acGsJn+kVpxV28TdOPX4CKBdiuv/pSKvfyZ54nxZToJZuQ8hwjjUf2LXcqYIqvIR7Oe5wO7JFobKIXHUVcPndhY5SPGjShH9+HhaTvAL1h6DkkeYsPMtoJ47vLgMDDgtnqdA==
template:
metadata:
annotations:
sealedsecrets.bitnami.com/cluster-wide: "true"
creationTimestamp: null
name: demo-secrets
type: Opaque
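Once this manifest is applied to the cluster, the controller decrypts it and creates a regular Secret named demo-secrets in the target namespace; a quick way to verify, assuming the namespace we set in the overlay:
$ kubectl get secret demo-secrets -n kubernetes-demo-dev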
Now I will add this file to the GitOps repo under overlays/development with the name sealed-secret.yaml, and I will update the kustomization file to include this resource. My kustomization file will look like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubernetes-demo-dev
resources:
- ../../base
- ingress.yaml
- sealed-secret.yaml
Let's explore other features of Kustomize by doing something that you may need. In the base deployment we have 2 replicas of the pod, but what if we want to scale up the number of replicas only for specific environments (let's say development)? We can create a patch and apply it only for development. So let's create a new file named replica-count.yaml in development, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes-demo
spec:
replicas: 3
and in kustomization, modify as:
resources:
...
# you add this key
patchesStrategicMerge:
- replica-count.yaml
...
namespace: kubernetes-demo-dev
Now if we build with Kustomize:
$ kustomize build apps/fastapi-service/overlays/development
we will see that the number of replicas for the development environment has changed to 3. This is the power of Kustomize when it comes to separating environments: you can extend what you want, but also override anything you want.
At this point, we have created all the resources necessary to replicate a development environment using ArgoCD and Kustomize. We won't apply these manifests through kubectl or in any manual way, because we want to keep everything consistent and reliable; instead, we will let ArgoCD apply these manifests and create the resources accordingly. Next we need a Continuous Integration pipeline for our sample app that builds the Docker image, pushes it to the registry, and updates the GitOps repo with the new image name so that ArgoCD picks up the change and deploys the new version of the application.
Continuous Integration with GitHub Actions — Continuous Delivery with ArgoCD
In the GitOps world, the CD part is normally handled by a tool that implements GitOps, like ArgoCD or Flux. We are using ArgoCD, so basically, ArgoCD is connected with the repo that holds the manifests, which I am referring to as the GitOps repo and when there is a change, ArgoCD synchronizes the change automatically in the cluster.
Let's go to our codebase and create a CI pipeline. GitHub expects the workflows to live under .github/workflows, so let's create the directories and the respective files, and also a branch for development:
➜ kubernetes-demo-app git:(main) ✗ mkdir -p .github/workflows
➜ kubernetes-demo-app git:(main) ✗ touch .github/workflows/workflow.yaml
➜ kubernetes-demo-app git:(main) ✗ git add .
➜ kubernetes-demo-app git:(main) ✗ git commit -m "Add workflow files"
create mode 100644 .github/workflows/workflow.yaml
➜ kubernetes-demo-app git:(main) git push
# I created a new branch as I want to associate development with the
# overlay that I created on GitOps repo
➜ kubernetes-demo-app git:(main) git checkout -b development
Switched to a new branch 'development'
➜ kubernetes-demo-app git:(development)
We are going to use GitHub Actions to build our pipeline. GitHub Actions is a powerful tool for automating software development workflows. It allows you to trigger actions based on events in your GitHub repository, such as commits, pull requests, and releases. In this case, we will use GitHub Actions to trigger a deployment to our Kubernetes cluster every time a change is pushed to the development branch.
Our CI pipeline has two goals (see the workflow sketch after this list):
Build the Docker image and push it to the registry
Update the repo that we use to store the manifests with the latest image that was pushed
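Here is a minimal sketch of .github/workflows/workflow.yaml along those lines; the action versions, the secret names (DOCKERHUB_USERNAME, DOCKERHUB_TOKEN, PAT), and the exact paths are assumptions based on this setup:
name: CI
on:
  push:
    branches: [development]
  pull_request:
    branches: [development]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      # install the application dependencies
      - run: pip install -r requirements.txt

  build-and-push:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # build the image and tag it with the commit SHA
      - uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: valonjanuzaj/kubernetes-demo:${{ github.sha }}

  update-manifest:
    runs-on: ubuntu-latest
    needs: build-and-push
    env:
      K8S_YAML_DIR: apps/fastapi-service
    steps:
      # check out the GitOps repo using a personal access token
      - uses: actions/checkout@v3
        with:
          repository: vjanz/kubernetes-demo-gitops
          token: ${{ secrets.PAT }}
      - name: Update image tag in the development overlay
        # kustomize is preinstalled on GitHub's ubuntu runners
        run: |
          cd $K8S_YAML_DIR/overlays/development
          kustomize edit set image my-image=valonjanuzaj/kubernetes-demo:${{ github.sha }}
      - name: Commit and push the change
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git commit -am "Update image to ${{ github.sha }}"
          git push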
Let's explain the workflow a bit. There are three jobs:
build: This job runs on an ubuntu-latest environment and checks out the code from the repository. It then sets up Python 3.9 and installs any dependencies specified in the requirements.txt file.
build-and-push: This job also runs on an ubuntu-latest environment and sets up QEMU and Docker Buildx. It then logs in to Docker Hub using the username and token stored as secrets, and builds and pushes a Docker image to Docker Hub tagged valonjanuzaj/kubernetes-demo:${{ github.sha }}, where github.sha is the commit SHA.
update-manifest: This job checks out a separate repository named vjanz/kubernetes-demo-gitops, which contains the Kubernetes manifests, and updates the development manifests in the $K8S_YAML_DIR/overlays/development directory with the new image version built and pushed in the build-and-push job. The changes are then committed and pushed back to the repository.
Basically what happens is: each time we create a pull request or push to the development branch, a new image is built and pushed to the registry. Aside from this, the pipeline checks out our GitOps repo to update the image to the latest one, based on the environment where the pull request or push happened.
There are some secrets being used in the workflow. To set up GitHub secrets, see the instructions here, and to set up a GitHub personal access token (PAT) see the instructions here. The secrets correspond to the Docker Hub login information and the GitHub personal access token needed to access the other repository.
The updated manifests in the GitOps repository are then pulled by ArgoCD, which ensures that the deployed application in the cluster is in sync with the desired state defined in the GitOps repository. This way ArgoCD ensures that the application version deployed in the cluster is always up-to-date and aligned with the version in the GitOps repository.
That's everything: we have set up our CI pipeline, which will build and push the image and then update the repository that holds the manifests. After the update is done, ArgoCD will see the changes and update the cluster accordingly.
Check if the pipeline is working as expected
Now let's check if the pipeline that we built is working as expected. Normally all the jobs should pass, the image should be built, and there should be a commit in the GitOps repository with the new image tag.
Let’s make a change in one of the routes and push the code!
@app.get("/health")
def health():
return {"status": "App is running!!"}
$ git add .
$ git commit -m "Update /health endpoint"
$ git push
If everything is set up correctly (keep an eye on the GitHub secrets), the workflow should complete without any errors. If we check the GitOps repo we should see an update on overlays/development, since we pushed to the development branch. We have configured that a change on the development branch deploys to the development environment on Kubernetes, and we can configure it so that a push to main updates the production environment (this is based on your preferences).
Perfect, so now let's just add our application to be monitored by ArgoCD, as this is the only step missing in the picture.
Setting up the application on ArgoCD — Automating the whole workflow!
Now that the continuous integration pipeline is set up and the application is continuously being integrated, all we need to do is set up the ArgoCD application, which will listen for changes in a specific directory and automatically deploy any change. In our case, we have added our manifests at apps/fastapi-service/overlays/development.
There are at least two ways to add an application in ArgoCD:
Configuring all the options from the ArgoCD UI.
Adding the application in a declarative way (I recommend this one)
Creating an Argo application in a declarative way is generally considered to be better than creating it from the UI for a few reasons:
Reproducibility: Declarative manifests allow you to version control your application configurations, making it easy to roll back to a previous version if something goes wrong.
Automation: Declarative manifests can be easily automated, allowing for repeatable and consistent deployments.
Auditability: Declarative manifests provide a clear and concise representation of the desired state of the application, making it easier to understand and audit the configuration of the application.
Portability: Declarative manifests can be easily ported across different environments, allowing for simpler migration and disaster recovery.
Easier to scale: Declarative manifests can be easily scaled up or down with minimal changes, making it easier to manage the application as it grows.
Below you can find the Argo Application for our app in declarative form:
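This is a sketch consistent with the explanations that follow; the metadata values, repo URL, and namespaces are assumptions based on this setup:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fastapi-service-development
  namespace: argocd
spec:
  project: fastapi-app
  source:
    repoURL: git@github.com:vjanz/kubernetes-demo-gitops.git
    targetRevision: HEAD
    # the directory that ArgoCD watches for changes
    path: apps/fastapi-service/overlays/development
  destination:
    # the same cluster where ArgoCD itself is installed
    server: https://kubernetes.default.svc
    namespace: kubernetes-demo-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true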
Let’s explain what those configurations mean:
name — the name of the application that is installed on ArgoCD (can be anything)
project — name of the project we want this application to be associated with
repoURL — the repo that we have linked with our ArgoCD (where we keep the manifest, GitOps repo)
path — Where is the application manifest located
destination, server — With ArgoCD you can manage multiple clusters, so in this case I am telling ArgoCD to install on the same cluster where ArgoCD is installed
namespace — Which namespace do we want the application to be deployed at
syncPolicy, automated — means that any change will be automatically synchronized without any manual interventions
The other configurations seem quite self-explanatory, so I will not go over each of them.
Let’s create the application:
$ kubectl apply -f fastapi-service-development.yaml
Now the structure on the repo looks like this:
├── apps
│ ├── argocd
│ │ └── fastapi-service-development.yaml
│ └── fastapi-service
│ ├── base
│ │ ├── deployment.yaml
│ │ ├── kustomization.yaml
│ │ └── service.yaml
│ └── overlays
│ ├── development
│ │ ├── ingress.yaml
│ │ ├── kustomization.yaml
│ │ ├── replica-count.yaml
│ │ └── sealed-secret.yaml
│ └── production
└── projects
└── fastapi-app.yaml
This is the application that we are deploying for the development environment. Note that if you want a separate one for production, you should create another manifest for it too, with its respective configuration.
Now if we head back to the ArgoCD UI, we can see that the application should be up and running!
Awesome, isn't it? Note that now you shouldn't make any changes from outside the Git repository; all changes should happen through Git. For example, if we delete a pod with kubectl or any other tool, ArgoCD will compare the state of the cluster with the state defined in the GitOps repository, and where they differ, ArgoCD will automatically enforce the one in the GitOps repository. This is really good, as you have only one source of truth for writing and managing these files.
Now let's see if changes are applied when we change something, like the number of replicas. I am going to update this file to make the deployment have only one pod (1 replica):
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes-demo
spec:
replicas: 1 # this was 3
Now let’s just push the changes to the GitOps repository:
$ git add .
$ git commit -m "Change replica count"
$ git push
and we can see that two pods get killed immediately by ArgoCD, as the state in the GitOps repo has changed compared to the one in the cluster. Keep in mind that these two should always be in sync!
Since we tried that, let’s try to change the number of replicas with kubectl and see what happens:
$ kubectl get deployment -n kubernetes-demo-dev
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-demo 1/1 1 1 30m
$ kubectl scale --replicas=3 deployment/kubernetes-demo -n kubernetes-demo-dev
deployment.apps/kubernetes-demo scaled
At first you may think that you changed the number of replicas and everything is going to be fine, but as soon as ArgoCD detects the change it will roll back to the state defined in the repository (1 replica), killing both newly created pods.
Wrapping up
So we have completed the full workflow, from creating the application and running it locally, to deploying it to Kubernetes using the best practices out there. I hope you find it helpful, and I am sure you learned a lot from this article, as I dedicated a lot of time to writing it using my knowledge of the topic and the problems related to it.
If you want to support my work, you can buy me a coffee 😄
If you have any questions, feel free to reach out to me.
Connect with me on LinkedIn, GitHub
Links and resources
Github repo for this part:
https://github.com/vjanz/kubernetes-demo-app