Automatic SSO in Kubernetes workloads using a sidecar container

gabrielbiasi

Gabriel de Biasi

Posted on December 12, 2022


āš  This post assumes you have a good working knowledge of Kubernetes and Helm, and that you write your own Helm charts. āš 

For security reasons, we often need to protect our workloads in Kubernetes with some kind of authentication, even something as simple as basic auth. However, some applications don't offer this option natively.


We have the basic structure of an application that has to be exposed to the Internet, but still without authentication.


To achieve that, we'll need an Identity Provider and a sidecar container to handle the authentication for us, without making any modifications to our own application. But how is this possible?? šŸ˜§

An IdP provides the identities we authenticate against. It can be any OIDC-compatible provider: Active Directory, Google Workspace, GitHub, Auth0, among others. In this example, we will use a Keycloak instance that has an IdP configuration with Google Workspace.

The star here is oauth2-proxy. It's a reverse proxy that authenticates requests against an OAuth2/OIDC provider, ships as a very small Docker image (~12.8 MB), and can serve static files directly or proxy upstream to another web server.

This proxy is fully configurable via environment variables, so we can define the default settings inside our Helm chart templates. In this example, if a developer wants to enable this functionality, they can set this value in the values.yaml file as follows:



# chart/values.yaml
authProxy:
  enabled: true


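If you want to avoid hard-coding the sidecar details in the Deployment template, the same flag can sit next to a few extra settings. A possible extended layout (the `image` and `port` keys below are illustrative, not part of the original chart):

```yaml
# chart/values.yaml — a hypothetical extended layout
authProxy:
  enabled: true
  # Pin the sidecar image so upgrades are explicit
  image: quay.io/oauth2-proxy/oauth2-proxy:v7.3.0
  # Port the proxy listens on inside the pod
  port: 5001
```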

We're gonna need to change the template of the Deployment and the Service. First, let's see what the changes in the Deployment look like.



# chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
          ports:
            - name: http
              containerPort: {{ .Values.containerPort }}
              protocol: TCP
          envFrom:
            - configMapRef:
                name: {{ include "chart.fullname" . }}
            - secretRef:
                name: {{ include "chart.fullname" . }}
        # ---
        # changes start HERE!
        # ---
        {{- if .Values.authProxy.enabled }}
        - name: auth
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.3.0
          ports:
            - name: auth
              containerPort: 5001
              protocol: TCP
          env:
            - name: OAUTH2_PROXY_HTTP_ADDRESS
              value: ":5001"
            - name: OAUTH2_PROXY_UPSTREAMS
              value: {{ print "http://127.0.0.1:" .Values.containerPort | quote }}
            - name: OAUTH2_PROXY_COOKIE_SECRET
              value: {{ randAlphaNum 32 | quote }}
            - name: OAUTH2_PROXY_COOKIE_NAME
              value: {{ include "chart.fullname" . }}
            - name: OAUTH2_PROXY_SKIP_PROVIDER_BUTTON
              value: "true"
            - name: OAUTH2_PROXY_AUTH_LOGGING
              value: "true"
            - name: OAUTH2_PROXY_REQUEST_LOGGING
              value: "false"
            - name: OAUTH2_PROXY_FORCE_CODE_CHALLENGE_METHOD
              value: "S256"
          envFrom:
            - configMapRef:
                name: {{ include "chart.fullname" . }}
            - secretRef:
                name: {{ include "chart.fullname" . }}
        {{- end }}



Note that if authProxy.enabled is true, a new container called "auth" is added to the Deployment. It already has some environment variables configured, but a few more are still needed, depending on which provider we use to authenticate. One caveat: randAlphaNum evaluates at template render time, so every helm upgrade generates a new cookie secret and invalidates existing sessions; for production you may prefer to keep it in a Secret.

In this example, we define the remaining environment variables using a mix of ConfigMaps and Secrets, referenced via configMapRef and secretRef. You can find all the available environment variables in the project documentation.
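As a sketch, the provider-specific settings for a Keycloak OIDC client could live in the chart's ConfigMap and Secret. The realm URL, hostnames, and client credentials below are placeholders you would replace with your own:

```yaml
# chart/templates/configmap.yaml — a sketch with placeholder values
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "chart.fullname" . }}
data:
  # Use oauth2-proxy's dedicated Keycloak OIDC provider
  OAUTH2_PROXY_PROVIDER: "keycloak-oidc"
  OAUTH2_PROXY_OIDC_ISSUER_URL: "https://keycloak.example.com/realms/my-realm"
  OAUTH2_PROXY_REDIRECT_URL: "https://app.example.com/oauth2/callback"
  # Allow any email domain; restrict this in real deployments
  OAUTH2_PROXY_EMAIL_DOMAINS: "*"
---
# chart/templates/secret.yaml — a sketch with placeholder credentials
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "chart.fullname" . }}
stringData:
  OAUTH2_PROXY_CLIENT_ID: "my-client-id"
  OAUTH2_PROXY_CLIENT_SECRET: "my-client-secret"
```

Since the Deployment references both objects through envFrom, these values land in the sidecar's environment without any further template changes.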

Now, let's see the changes that we need to do in the Service.



# chart/templates/service.yaml
{{- $targetPort := .Values.authProxy.enabled | ternary "auth" "http" }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: {{ $targetPort | quote }}
      protocol: TCP
  selector:
    {{- include "chart.selectorLabels" . | nindent 4 }}



In the first line, we create a new variable called $targetPort using the ternary function.

In a nutshell, spec.ports[0].targetPort switches between "http" and "auth", depending on whether authProxy.enabled is true or not.
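For example, with authProxy.enabled set to true and assuming service.port is 80, the rendered Service port block would look like this:

```yaml
# Rendered output when authProxy.enabled is true
ports:
  - name: http
    port: 80
    targetPort: "auth"   # traffic hits the oauth2-proxy sidecar first
    protocol: TCP
```

With the flag set to false, targetPort renders as "http" and traffic goes straight to the application container, exactly as before.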

Let's see how these modifications work out together.

[Diagram: outside traffic reaches the oauth2-proxy sidecar, which proxies authenticated requests to the application container]

Notice that we don't need to make any modifications to the original container, as we are simply proxying the traffic through the sidecar.

Now, whenever outside users try to access our application, they'll see the Keycloak login page first.

[Screenshot: the Keycloak login page. You shall not pass!]

What do you think about this solution? šŸ¤”
Thank you for reading!
šŸ€šŸ€šŸ€
