5-Step Approach: Dry Run Kubernetes Resources with ProjectSveltos
Eleni Grosdouli
Posted on January 20, 2024
Introduction
Whether you are a DevOps engineer or a Kubernetes administrator, we have all been in the position of having to update a Kubernetes deployment to a later version in a Production environment. Even if the deployment was tested multiple times in the Test/Staging environment, there is always a fear that something might break or go wrong during the rollout.
Fear no more! Today, we will explore the Dry Run capability of ProjectSveltos and how it helps engineers and Kubernetes administrators update deployments in Production environments with more confidence. Kubectl offers a "dry run" functionality, which allows users to simulate the execution of the commands they want to apply. Sveltos takes it one step further: you can launch a simulation of all the operations you would normally execute in a live run. The best part? No actual changes are made to the matching clusters!
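For comparison, this is what kubectl's built-in dry run looks like (a generic sketch; deploy.yaml stands in for any manifest you want to test):

```shell
# Client-side dry run: validate the manifest locally, without sending
# the object to the API server
kubectl apply -f deploy.yaml --dry-run=client

# Server-side dry run: the API server processes the request (including
# admission webhooks and validation) but persists nothing
kubectl apply -f deploy.yaml --dry-run=server
```

This simulates a single command against a single cluster; the Sveltos DryRun mode instead simulates an entire profile of add-ons across every matching cluster.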
For today's demonstration, we will update Kyverno on an RKE2 cluster. If you did not have the chance to read the previous post about ProjectSveltos, have a look before you continue with this one.
Diagram
Prerequisites
For this demonstration, I have already installed ArgoCD, deployed Sveltos to cluster04 and created an RKE2 cluster. For the first two, follow step 1 and step 2 from the previous post.
+--------------+----------------------+----------------------+
| Cluster Name | Type                 | Version              |
+--------------+----------------------+----------------------+
| cluster04    | Management Cluster   | RKE2 v1.26.11+rke2r1 |
| cluster11    | Managed CAPI Cluster | RKE2 v1.26.11+rke2r1 |
+--------------+----------------------+----------------------+
Step 1: Register the RKE2 Cluster and Add Label
We will use the sveltosctl to register cluster11 with Sveltos. For the registration, we need three things: a service account, a kubeconfig associated with that account and a namespace. If you are unsure how to create a Service Account and an associated kubeconfig, there is a script publicly available to help you out.
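As a rough sketch of what such a helper script does (the names, role, and token duration below are illustrative assumptions; the upstream script may differ in its details):

```shell
# Create a service account in the managed cluster and grant it permissions
# (the exact role granted by the upstream script may be narrower than cluster-admin)
kubectl create namespace projectsveltos
kubectl create serviceaccount projectsveltos -n projectsveltos
kubectl create clusterrolebinding projectsveltos-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=projectsveltos:projectsveltos

# Issue a token for the service account (Kubernetes v1.24+)
TOKEN=$(kubectl create token projectsveltos -n projectsveltos --duration=24h)

# Assemble a standalone kubeconfig around that token; SERVER and CA_FILE
# must point at the managed cluster's API endpoint and CA certificate
kubectl config set-cluster cluster11 --server="$SERVER" \
  --certificate-authority="$CA_FILE" --embed-certs=true --kubeconfig=cluster11.yaml
kubectl config set-credentials projectsveltos --token="$TOKEN" --kubeconfig=cluster11.yaml
kubectl config set-context default --cluster=cluster11 \
  --user=projectsveltos --kubeconfig=cluster11.yaml
kubectl config use-context default --kubeconfig=cluster11.yaml
```

The resulting cluster11.yaml is what we hand to sveltosctl below.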
Registration
$ sveltosctl register cluster --namespace=projectsveltos --cluster=cluster11 --kubeconfig=cluster11.yaml
Verification
$ kubectl get sveltosclusters -n projectsveltos
NAME READY VERSION
cluster11 true v1.26.11+rke2r1
Cluster Labelling and Verification
To deploy and manage Kubernetes add-ons with the help of Sveltos, the concepts of the ClusterProfile and cluster labelling come into play. ClusterProfile is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.
For this demonstration, we will set the unique label "env=prod".
$ kubectl label sveltosclusters cluster11 env=prod -n projectsveltos
$ kubectl get sveltoscluster -n projectsveltos --show-labels
NAME READY VERSION LABELS
cluster11 true v1.26.11+rke2r1 env=prod,sveltos-agent=present
Step 2: Create Kyverno ClusterProfile
The Kyverno Helm chart (v3.0.9) and a Kyverno policy that disallows deployments using the "latest" image tag will get deployed to cluster11. The Helm chart and the Kyverno policy are defined in the same ClusterProfile. To push the Kyverno policy to cluster11, we save the policy as a ConfigMap and reference it in the ClusterProfile.
Note: The configuration is done on cluster04 as it is our Sveltos management cluster.
Kyverno Policy
$ wget https://raw.githubusercontent.com/kyverno/policies/main/best-practices/disallow-latest-tag/disallow-latest-tag.yaml
$ kubectl create configmap disallow-latest-tag --from-file=disallow-latest-tag.yaml
$ kubectl get cm
NAME DATA AGE
disallow-latest-tag 1 4s
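For reference, the downloaded policy looks roughly like the following (an abbreviated sketch of the upstream best-practices policy; consult the file you fetched for the authoritative content):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Audit
  background: true
  rules:
  - name: require-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "An image tag is required."
      pattern:
        spec:
          containers:
          - image: "*:*"
  - name: validate-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Using a mutable image tag e.g. 'latest' is not allowed."
      pattern:
        spec:
          containers:
          - image: "!*:latest"
```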
ClusterProfile
---
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno-disallow-latest-tag
spec:
  clusterSelector: env=prod
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.0.9
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
  policyRefs:
  - kind: ConfigMap
    name: disallow-latest-tag
    namespace: default
$ kubectl apply -f "clusterprofile_kyverno_disallow_latest.yaml"
$ sveltosctl show addons
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
| CLUSTER                  | RESOURCE TYPE            | NAMESPACE | NAME                | VERSION | TIME                          | CLUSTER PROFILES            |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
| projectsveltos/cluster11 | helm chart               | kyverno   | kyverno-latest      | 3.0.9   | 2024-01-20 17:20:21 +0000 UTC | kyverno-disallow-latest-tag |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy |           | disallow-latest-tag | N/A     | 2024-01-20 17:20:21 +0000 UTC | kyverno-disallow-latest-tag |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
Verification - Cluster11
$ kubectl get all -n kyverno
NAME READY STATUS RESTARTS AGE
pod/kyverno-admission-controller-65f76b4f47-gfrwp 1/1 Running 0 75s
pod/kyverno-background-controller-66d498dd5c-xlkcs 1/1 Running 0 75s
pod/kyverno-cleanup-controller-8689db777f-6nd72 1/1 Running 0 75s
pod/kyverno-reports-controller-84fd865d49-gcb2f 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kyverno-background-controller-metrics ClusterIP 10.43.63.55 <none> 8000/TCP 75s
service/kyverno-cleanup-controller ClusterIP 10.43.170.79 <none> 443/TCP 75s
service/kyverno-cleanup-controller-metrics ClusterIP 10.43.228.47 <none> 8000/TCP 75s
service/kyverno-latest-svc ClusterIP 10.43.80.134 <none> 443/TCP 75s
service/kyverno-latest-svc-metrics ClusterIP 10.43.184.235 <none> 8000/TCP 75s
service/kyverno-reports-controller-metrics ClusterIP 10.43.41.181 <none> 8000/TCP 75s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kyverno-admission-controller 1/1 1 1 75s
deployment.apps/kyverno-background-controller 1/1 1 1 75s
deployment.apps/kyverno-cleanup-controller 1/1 1 1 75s
deployment.apps/kyverno-reports-controller 1/1 1 1 75s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kyverno-admission-controller-65f76b4f47 1 1 1 75s
replicaset.apps/kyverno-background-controller-66d498dd5c 1 1 1 75s
replicaset.apps/kyverno-cleanup-controller-8689db777f 1 1 1 75s
replicaset.apps/kyverno-reports-controller-84fd865d49 1 1 1 75s
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/kyverno-cleanup-admission-reports */10 * * * * False 0 <none> 75s
cronjob.batch/kyverno-cleanup-cluster-admission-reports */10 * * * * False 0 <none> 75s
Step 3: Update Kyverno to v3.1.4
Imagine you want to update the Kyverno deployment to the latest version available. We will update the ClusterProfile above to enable the Dry Run capability.
Update ClusterProfile
---
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno-disallow-latest-tag
spec:
  syncMode: DryRun
  clusterSelector: env=prod
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.1.4
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
  policyRefs:
  - kind: ConfigMap
    name: disallow-latest-tag
    namespace: default
In the definition above, we added "syncMode: DryRun" to activate the Sveltos DryRun mode.
$ kubectl apply -f "clusterprofile_kyverno_disallow_latest.yaml"
$ sveltosctl show dryrun
+--------------------------+--------------------------+-----------+---------------------+-----------+--------------------------------+-----------------------------+
| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | ACTION | MESSAGE | CLUSTER PROFILE |
+--------------------------+--------------------------+-----------+---------------------+-----------+--------------------------------+-----------------------------+
| projectsveltos/cluster11 | helm release | kyverno | kyverno-latest | Upgrade | Current version: "3.0.9". | kyverno-disallow-latest-tag |
| | | | | | Would move to version: | |
| | | | | | "v3.1.4" | |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy | | disallow-latest-tag | No Action | Object already deployed. | kyverno-disallow-latest-tag |
| | | | | | And policy referenced by | |
| | | | | | ClusterProfile has not changed | |
| | | | | | since last deployment. | |
+--------------------------+--------------------------+-----------+---------------------+-----------+--------------------------------+-----------------------------+
From the output above, we can observe that only the Kyverno Helm release will be upgraded from v3.0.9 to v3.1.4, while nothing will happen to the already applied Kyverno policy.
Of course, this is a simplistic example to demonstrate the Dry Run capability. However, imagine a very large deployment with multiple components and dependencies; this is where the Sveltos Dry Run feature really comes in handy.
Step 4: Deploy Kyverno v3.1.4 to Cluster11
Once we are happy with the changes to be performed on the cluster, it is time to deploy them. To do so, we update the syncMode field to Continuous. The ClusterProfile will then look like the YAML below.
---
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno-disallow-latest-tag
spec:
  syncMode: Continuous
  clusterSelector: env=prod
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.1.4
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
  policyRefs:
  - kind: ConfigMap
    name: disallow-latest-tag
    namespace: default
$ kubectl apply -f "clusterprofile_kyverno_disallow_latest.yaml"
$ sveltosctl show usage
+----------------+--------------------+-----------------------------+--------------------------+
| RESOURCE KIND | RESOURCE NAMESPACE | RESOURCE NAME | CLUSTERS |
+----------------+--------------------+-----------------------------+--------------------------+
| ClusterProfile | | kyverno-disallow-latest-tag | projectsveltos/cluster11 |
| ConfigMap | default | disallow-latest-tag | projectsveltos/cluster11 |
+----------------+--------------------+-----------------------------+--------------------------+
$ sveltosctl show addons
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
| projectsveltos/cluster11 | helm chart | kyverno | kyverno-latest | 3.1.4 | 2024-01-20 17:36:51 +0000 UTC | kyverno-disallow-latest-tag |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy | | disallow-latest-tag | N/A | 2024-01-20 17:36:49 +0000 UTC | kyverno-disallow-latest-tag |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
Step 5: Verify the Kyverno Update
From the Sveltos point of view, we can clearly see that Kyverno v3.1.4 has been deployed in the cluster. Let's confirm this is the case from the cluster11 point of view.
$ kubectl get all -n kyverno
NAME READY STATUS RESTARTS AGE
pod/kyverno-admission-controller-69c4c65769-qdg4w 1/1 Running 0 8m10s
pod/kyverno-background-controller-857c7b7b79-ngkcl 1/1 Running 0 8m10s
pod/kyverno-cleanup-admission-reports-28429540-kkrt4 0/1 Completed 0 5m7s
pod/kyverno-cleanup-cluster-admission-reports-28429540-wjz44 0/1 Completed 0 5m7s
pod/kyverno-cleanup-controller-7c9f487ccd-lrz6j 1/1 Running 0 8m10s
pod/kyverno-reports-controller-7bb7db947-f4ff5 1/1 Running 0 8m10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kyverno-background-controller-metrics ClusterIP 10.43.63.55 <none> 8000/TCP 24m
service/kyverno-cleanup-controller ClusterIP 10.43.170.79 <none> 443/TCP 24m
service/kyverno-cleanup-controller-metrics ClusterIP 10.43.228.47 <none> 8000/TCP 24m
service/kyverno-latest-svc ClusterIP 10.43.80.134 <none> 443/TCP 24m
service/kyverno-latest-svc-metrics ClusterIP 10.43.184.235 <none> 8000/TCP 24m
service/kyverno-reports-controller-metrics ClusterIP 10.43.41.181 <none> 8000/TCP 24m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kyverno-admission-controller 1/1 1 1 24m
deployment.apps/kyverno-background-controller 1/1 1 1 24m
deployment.apps/kyverno-cleanup-controller 1/1 1 1 24m
deployment.apps/kyverno-reports-controller 1/1 1 1 24m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kyverno-admission-controller-65f76b4f47 0 0 0 24m
replicaset.apps/kyverno-admission-controller-69c4c65769 1 1 1 8m10s
replicaset.apps/kyverno-background-controller-66d498dd5c 0 0 0 24m
replicaset.apps/kyverno-background-controller-857c7b7b79 1 1 1 8m10s
replicaset.apps/kyverno-cleanup-controller-7c9f487ccd 1 1 1 8m10s
replicaset.apps/kyverno-cleanup-controller-8689db777f 0 0 0 24m
replicaset.apps/kyverno-reports-controller-7bb7db947 1 1 1 8m10s
replicaset.apps/kyverno-reports-controller-84fd865d49 0 0 0 24m
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/kyverno-cleanup-admission-reports */10 * * * * False 0 5m7s 24m
cronjob.batch/kyverno-cleanup-cluster-admission-reports */10 * * * * False 0 5m7s 24m
NAME COMPLETIONS DURATION AGE
job.batch/kyverno-cleanup-admission-reports-28429540 1/1 4s 5m7s
job.batch/kyverno-cleanup-cluster-admission-reports-28429540 1/1 4s 5m7s
$ kubectl get deploy kyverno-admission-controller -n kyverno -o yaml | grep -i image
image: ghcr.io/kyverno/kyverno:v1.11.4
imagePullPolicy: IfNotPresent
image: ghcr.io/kyverno/kyvernopre:v1.11.4
imagePullPolicy: IfNotPresent
Updates become easier and less stressful with the Dry Run capability of Sveltos. Try it out now!
👏 Support this project
Every contribution counts! If you enjoyed this article, check out the Projectsveltos GitHub repo. You can star 🌟 the project if you find it helpful.
The GitHub repo is a great resource for getting started with the project. It contains the code, documentation, and many more examples.
Thanks for reading!