Securing Kubernetes Secrets with Conjur
Sameer Kulkarni
Posted on March 26, 2021
Why secure Kubernetes secrets?
Secrets management is one of the most important aspects of securing your Kubernetes cluster. Out of the box, Kubernetes only base64-encodes stored secrets, which is not enough. You have to implement a number of security best practices on top, such as etcd encryption at rest and access control with RBAC, to prevent possible security breaches. Using a secrets management solution like CyberArk Conjur not only secures secrets for Kubernetes, but also provides other benefits, as we will see in this post.
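To see why base64 encoding alone is not protection, note that anyone who can read a Secret object can decode it in a single step. The secret name & key below are hypothetical and shown only for illustration.
$ kubectl create secret generic demo-creds --from-literal=password=supersecret
$ kubectl get secret demo-creds -o jsonpath='{.data.password}' | base64 -d
supersecret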
What is Conjur?
CyberArk Conjur is a secrets manager. It helps you manage secrets in Kubernetes, as well as across applications, tools & clouds. It offers Role Based Access Control (RBAC) with an audit trail to easily track each stored secret. It encrypts secrets at rest with AES-256-GCM and in transit using mTLS. Additionally, you can manage access for each secret & rotate secrets automatically.
In this post, we will see how to install Conjur OSS on Kubernetes. We will go through a basic set of Conjur policies and will load them into Conjur. We’ll also see how to run an application in Kubernetes which uses secrets from Conjur by conforming to the defined policies.
Pre-requisites
- Familiarity with advanced YAML concepts.
You may already be familiar with the way Kubernetes spec files are written in YAML. To understand & define Conjur policies, however, you also need a few more YAML concepts, viz. tags, anchors & aliases. The Conjur website has a quick refresher on these; alternatively, you can go through the full YAML documentation. A small illustration follows this list.
- A working Kubernetes cluster
- Docker installed locally
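For instance, the snippet below is a minimal illustration (not one of the policies we will load) that uses all three concepts: !user & !group are tags, &devs defines an anchor, and *devs is an alias referring back to the anchored node.
- !user alice
- &devs !group developers
- !grant
  role: *devs
  member: !user alice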
Setup
How to Install Conjur?
The easiest way to install Conjur on a Kubernetes cluster is by using its Helm chart. Let's first create a custom values file for the Helm chart.
$ DATA_KEY="$(docker run --rm cyberark/conjur data-key generate)"
$ HELM_RELEASE_NAME=conjur-oss
$ cat >values.yaml <<EOT
account:
  name: "default"
  create: true
authenticators: "authn-k8s/namespace,authn-k8s/deployment,authn-k8s/service_account,authn-k8s/demo,authn"
dataKey: $DATA_KEY
ssl:
  altNames:
  - $HELM_RELEASE_NAME
  - $HELM_RELEASE_NAME-ingress
  - $HELM_RELEASE_NAME.conjur.svc.cluster.local
  - $HELM_RELEASE_NAME-ingress.conjur.svc.cluster.local
service:
  external:
    enabled: false
replicaCount: 1
EOT
The dataKey is used for encrypting the secrets in the db. The ssl.altNames will be used for the SSL configuration of the Conjur service that the Helm chart will create.
Install Conjur OSS on the Kubernetes cluster with the following commands.
$ CONJUR_NAMESPACE=conjur
$ kubectl create namespace "$CONJUR_NAMESPACE"
$ VERSION=2.0.3
$ helm repo update
$ helm install \
-n "$CONJUR_NAMESPACE" \
-f values.yaml \
"$HELM_RELEASE_NAME" \
https://github.com/cyberark/conjur-oss-helm-chart/releases/download/v$VERSION/conjur-oss-$VERSION.tgz
The VERSION declared above is the Conjur Helm chart release version. As of writing this post, the latest Conjur OSS Helm chart version is 2.0.3. Refer to the Conjur Helm chart releases for the latest available chart version.
Once the Helm chart is installed, it creates an admin user along with an API key. You will need this API key for the initial load of the Conjur policies, secrets, etc. You'll also need it in "break-glass" scenarios, so store it in a safe place. You can fetch it using the commands below.
$ POD_NAME=$(kubectl get pods --namespace $CONJUR_NAMESPACE \
-l "app=$HELM_RELEASE_NAME,release=$HELM_RELEASE_NAME" \
-o jsonpath="{.items[0].metadata.name}")
$ kubectl exec --namespace $CONJUR_NAMESPACE \
$POD_NAME \
--container=$HELM_RELEASE_NAME \
-- conjurctl role retrieve-key default:user:admin | tail -1
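If you want to keep the key handy for the conjur authn login step later (an assumption for this walkthrough; in practice store it in a proper vault), you can capture it in a shell variable:
$ ADMIN_API_KEY=$(kubectl exec --namespace $CONJUR_NAMESPACE \
$POD_NAME \
--container=$HELM_RELEASE_NAME \
-- conjurctl role retrieve-key default:user:admin | tail -1)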
Verify the installation.
$ kubectl get po,svc -n $CONJUR_NAMESPACE
NAME READY STATUS RESTARTS AGE
pod/conjur-oss-b888db5d5-vmfl5 1/2 Running 0 77s
pod/conjur-oss-postgres-0 1/1 Running 0 77s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/conjur-oss NodePort 10.68.34.148 <none> 443:31022/TCP 79s
service/conjur-oss-postgres ClusterIP 10.68.36.72 <none> 5432/TCP 79s
Define Conjur Policies
Conjur policies define objects in the Conjur database in a tree structure. Some examples of the objects defined in policies are users, roles, secrets & applications. Policies also define the rules for role based access control. While the Conjur documentation describes policy best practices, we will use one of the Conjur demo repositories as a starting point. I've used the policies in the demo repository as the base and have further simplified them to make the basic concepts easier to follow. Download and review the simplified policy files from my repository. Note that all the policies need to have a .yml extension.
- 1_users.yml defines users and roles. It also grants role based access to a group of users as well as to individual users.
- 2_app-authn-def.yml defines applications & groups them into a layer for easier access management.
- 3_cluster-authn-svc-def.yml defines the authenticator service & the SSL certificates for the mTLS communication between Conjur & its clients. In this case, the Conjur clients are applications running on Kubernetes. It also defines role based access to authenticate with the service.
- 4_app-identity-def.yml connects the authentication identities to application identities.
- 5_authn-any-policy-branch.yml verifies that hosts can authenticate with Conjur from anywhere in the policy branch to retrieve secrets for Kubernetes.
- 6_app-access.yml defines the secret variables for the different applications that will use Conjur as their secrets manager. Note that the variables declared here are just secret variable names & not the values; a sketch of what such a declaration can look like follows this list.
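For illustration, a variable declaration like the ones in 6_app-access.yml can look roughly like the fragment below. The demo-app-vars ids match the secrets we load later in this post, but treat the exact layout as an assumption, since the real file may structure its policy branches differently.
- !policy
  id: demo-app-vars
  body:
  - !variable url
  - !variable username
  - !variable password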
Generate mTLS cert & key
Run the create_mtls_certs.sh shell script to create the mTLS cert & key. Make sure to update AUTHENTICATOR_ID & CONJUR_ACCOUNT in the script with the values appropriate for the Conjur installation. AUTHENTICATOR_ID is the part that follows conjur/authn-k8s/ in the id of the 3_cluster-authn-svc-def.yml policy file.
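For example, if the authenticator in 3_cluster-authn-svc-def.yml is declared roughly like the sketch below (an illustration; check the actual file), then AUTHENTICATOR_ID is demo and CONJUR_ACCOUNT is the account chosen during installation, default in our case.
- !policy
  id: conjur/authn-k8s/demo
  body:
  - !webservice
  - !policy
    id: ca
    body:
    - !variable cert
    - !variable key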
Load policies & secrets
We'll use conjur-cli to load the policies & data into Conjur. Conjur provides a container image pre-packaged with the Conjur CLI. We will run this Conjur client as a pod on the cluster, with the policies and certificates mounted on it as configmap volumes. conjur-cli will then load the policies & certificates into the Conjur server from these volumes.
Place all the policy files under the policy directory and the mTLS cert-key pair in the mtls directory so that we can create configmaps out of them.
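With the simplified policies from the repository and the generated cert-key pair in place, the working directory should look roughly like this (the mTLS file names ca.cert & ca.key are assumptions matching the variable load commands later in this post):
$ ls mtls/ policy/
mtls/:
ca.cert  ca.key

policy/:
1_users.yml                  4_app-identity-def.yml
2_app-authn-def.yml          5_authn-any-policy-branch.yml
3_cluster-authn-svc-def.yml  6_app-access.yml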
$ # Create configmap containing the mTLS cert & key
$ kubectl create configmap conjur-ca -n $CONJUR_NAMESPACE --from-file $(pwd)/mtls
$ # Create a configmap containing all the policy files
$ kubectl create configmap policies -n $CONJUR_NAMESPACE --from-file $(pwd)/policy
Run the Conjur client pod with the above configmaps mounted as volumes. Download the sample pod config from here. Create the pod with the downloaded config & exec into it to load the values.
$ kubectl create -f conjur-client.yaml
$ kubectl exec -it -n $CONJUR_NAMESPACE conjur-client -- sh
Connect to the Conjur server & authenticate as the admin user.
$ export CONJUR_URL=https://conjur-oss
$ export ACCOUNT=default
$ conjur init -u $CONJUR_URL -a $ACCOUNT
$ conjur authn login -u admin -p <admin_api_key_printed_by_helm_install>
We can start loading the policies now. Loading the policy files 1_users.yml and 2_app-authn-def.yml into Conjur generates API keys for the users & hosts defined in them. The user API keys can be distributed to the respective team members, allowing them to authenticate & interact with Conjur. For the application, we will use the Kubernetes authenticator instead of host API keys to authenticate with Conjur.
$ conjur policy load root policy/1_users.yml
$ conjur policy load root policy/2_app-authn-def.yml
$ conjur policy load root policy/3_cluster-authn-svc-def.yml
$ conjur policy load root policy/4_app-identity-def.yml
$ conjur policy load root policy/5_authn-any-policy-branch.yml
$ conjur policy load root policy/6_app-access.yml
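Each load command prints its result as JSON. When a policy creates users or hosts, the output also contains their generated API keys, roughly like the illustrative snippet below (the user name is hypothetical), so capture it if you plan to hand out the user API keys.
{
  "created_roles": {
    "default:user:alice": {
      "id": "default:user:alice",
      "api_key": "<generated_api_key>"
    }
  },
  "version": 1
}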
Load mTLS certificate & key.
$ conjur variable values add conjur/authn-k8s/demo/ca/cert "$(cat conjur-ca/ca.cert)"
$ conjur variable values add conjur/authn-k8s/demo/ca/key "$(cat conjur-ca/ca.key)"
Load secret values.
$ conjur variable values add demo-app-vars/url "https://my.app.com"
$ conjur variable values add demo-app-vars/username "myuser"
$ conjur variable values add demo-app-vars/password "supersecret"
You don’t want the client pod to continue running in the cluster, especially because it’s currently logged in to the Conjur server. Hence, either log out from the Conjur server with conjur authn logout or delete the conjur-client pod. Also, delete the configmaps mounted on the client pod.
$ kubectl delete -f conjur-client.yaml
$ kubectl delete cm conjur-ca policies
Configure & run application
Conjur offers various authenticators for users and hosts. Here we will use the Kubernetes authenticator to get our application host authenticated with the Conjur server. The Kubernetes authenticator uses the Kubernetes APIs to authenticate resources like Pods, Deployments, etc. Refer to the Conjur documentation for the full list of supported Kubernetes resources that can be defined as hosts.
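For example, an application identity for the Kubernetes authenticator is typically declared as a host whose annotations (or, in older policy styles, whose id) describe the Kubernetes resource it maps to. The fragment below is a generic, hypothetical sketch and not taken from the demo policies.
- !host
  id: test-app
  annotations:
    authn-k8s/namespace: test
    authn-k8s/service-account: test-app-sa
    authn-k8s/authentication-container-name: authenticator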
The Kubernetes Authenticator Client is one of the two options for using this authentication method. You can run it as an initContainer or as a sidecar for each application. Configure the Conjur URL, account, login, etc., as environment variables on the application and the authenticator containers. The login is the full host id as defined in the 2_app-authn-def.yml policy file. You also need to mount the SSL certs for the Conjur service; in this case, we use the SSL certificates generated by the Helm chart during the Conjur installation. If you have your own SSL certificates configured on the Conjur server, you can use those instead. Note that the value of CONJUR_AUTHN_URL on the authenticator container is slightly different from CONJUR_APPLIANCE_URL on the application container: CONJUR_AUTHN_URL has the authentication service id appended to CONJUR_APPLIANCE_URL.
The authenticator client authenticates itself to the configured Conjur server URL & provides its identity, i.e., the login id, pod name, namespace, etc. Conjur validates the information provided by the authenticator client against the defined policies as well as against Kubernetes, and issues an access token. The access token is valid only for 8 minutes. The client container saves the token on an in-memory volume, which is mounted to both containers, viz. the application & the authenticator.
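A trimmed sketch of what such a pod spec can look like is shown below. The image names, host id, URLs & secret key names are assumptions for illustration, and the authenticator also usually needs pod metadata env variables via the downward API, which are omitted here; refer to the example deployment file used later in this post for the exact configuration.
  containers:
  - name: authenticator                       # Conjur Kubernetes authenticator client sidecar
    image: cyberark/conjur-authn-k8s-client   # image name assumed
    env:
    - name: CONJUR_AUTHN_URL                  # Conjur URL with the authenticator service id appended
      value: https://conjur-oss.conjur.svc.cluster.local/authn-k8s/demo
    - name: CONJUR_ACCOUNT
      value: default
    - name: CONJUR_AUTHN_LOGIN                # full host id from 2_app-authn-def.yml; value assumed
      value: host/demo-app
    - name: CONJUR_AUTHN_TOKEN_FILE           # where the sidecar writes the access token
      value: /run/conjur/access-token
    - name: CONJUR_SSL_CERTIFICATE            # Conjur SSL cert from the copied secret; key name assumed
      valueFrom:
        secretKeyRef:
          name: conjur-oss-conjur-ssl-cert
          key: tls.crt
    volumeMounts:
    - name: conjur-access-token
      mountPath: /run/conjur
  - name: app                                 # application container running summon
    image: demo-app                           # image built from the Dockerfile shown below; name assumed
    env:
    - name: CONJUR_APPLIANCE_URL              # note: no authenticator service id appended here
      value: https://conjur-oss.conjur.svc.cluster.local
    - name: CONJUR_ACCOUNT
      value: default
    - name: CONJUR_AUTHN_TOKEN_FILE           # summon reads the token written by the sidecar
      value: /run/conjur/access-token
    - name: CONJUR_SSL_CERTIFICATE
      valueFrom:
        secretKeyRef:
          name: conjur-oss-conjur-ssl-cert
          key: tls.crt
    volumeMounts:
    - name: conjur-access-token
      mountPath: /run/conjur
  volumes:
  - name: conjur-access-token
    emptyDir:
      medium: Memory                          # in-memory volume shared by both containers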
Summon, a separate open source utility from CyberArk, uses this token to fetch the secret values. Your application container needs to run summon as its main process, with the path to a secrets.yml file that lists all the secrets it needs to pull from Conjur. Summon runs the application executable as a sub-process & passes the fetched secret values to it as environment variables. See the command configured in the Dockerfile of the application we're about to run.
$ tail -n1 Dockerfile
ENTRYPOINT ["summon", "--provider", "summon-conjur", "-f", "/etc/secrets.yml", "/bin/sh", "-c", "while true; do printenv | grep PASSWORD; sleep 5; done"]
The !var in the secrets.yml file indicates that the value needs to be injected as an environment variable. It can be replaced with !file:var in case you want the value written to a file instead; in that case, the environment variable on the left-hand side will contain the path of the file where the secret content is written. Observe the contents of our secrets.yml below.
$ cat secrets.yml
PASSWORD: !var demo-app-vars/password
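If you instead wanted the password written to a file, using the !file:var form described above, the entry would look roughly like this, with the PASSWORD environment variable then holding the path of the temporary file that contains the value.
PASSWORD: !file:var demo-app-vars/password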
Before running the application, let's first create our application namespace & copy the secret containing the Conjur TLS certs into it.
$ kubectl create namespace test
$ kubectl get secret conjur-oss-conjur-ssl-cert --namespace=conjur -oyaml |\
sed 's/namespace: conjur/namespace: test/g' |\
kubectl apply -f -
Use the example application deployment file to run the application. As you saw in the Dockerfile, my example application is a busybox container. It just prints the value of the secret demo-app-vars/password from Conjur every 5 seconds. A typical application should never print secret values in its logs; we do it here only to demonstrate that the value is available to the application from Conjur. Let's run it & observe the logs.
$ kubectl create -f busybox.yaml
Check to see if the application has the secret value from Conjur available as an environment variable.
$ kubectl logs -f -lapp=busybox
pass: supersecret
pass: supersecret
pass: supersecret
pass: supersecret
An additional security benefit of using Conjur & Summon is that the secret value is available only to the application process & not to the entire container. This means that if an attacker were to get access inside the application container, they wouldn't be able to read the secret values by listing the environment variables in the container.
$ kubectl exec -it -lapp=busybox -- sh
$ printenv | grep PASSWORD
$
$ # No output above
Cleanup
Clean up all the resources created in this post with the commands below:
$ kubectl delete -f busybox.yaml
$ helm delete $HELM_RELEASE_NAME -n $CONJUR_NAMESPACE
$
$ ## Delete client pod & configmap, if not already removed
$ kubectl delete -f conjur-client.yaml
$ kubectl delete cm -n $CONJUR_NAMESPACE conjur-ca policies
Conclusion
In this post, we looked at what Conjur is, its uses & basic concepts. We also installed Conjur on a Kubernetes cluster and integrated it with a sample application running in Kubernetes. Hope this gives you a good start for using Conjur with Kubernetes.
We’re always thrilled to connect with people working with cloud native technologies. For any queries or comments, you can reach out to us via Twitter and LinkedIn.
References
conjur.org
Conjur OSS Helm Chart
Kubernetes Conjur Demo
My GitHub repo with all the resources used in the post