Helm: reusable chart — named templates, and a generic chart for multiple applications
Arseny Zinchenko
Posted on October 20, 2020
Our project is growing, and more and more applications are being launched on the AWS Elastic Kubernetes Service.
Finally, we've faced the question already mentioned in Helm: step-by-step chart creation and deployment from Jenkins (Rus) - what to do with Kubernetes manifests and Helm templates when you have a lot of similar applications?
Especially now, when we have the same code from the same repository deployed as three similar applications.
At first we had only one, then another was added, and now we are preparing to release the third one.
As a result, their charts now look like this:
$ tree -d k8s/
k8s/
├── app1-chart
│   ├── charts
│   ├── env
│   │   ├── dev
│   │   ├── dev-2
│   │   └── prod
│   └── templates
└── app2-chart
    ├── charts
    ├── env
    │   ├── dev
    │   ├── prod
    │   └── stage
    └── templates
So, do I now need to copy the same chart yet again?
I don't like that idea at all, so let's take a deeper look at Helm templates and try to create one chart that can be used to deploy three similar applications.
Contents
- Deployment — the file’s structure
- Helm Named templates
- Common labels
- define
- include
- indent vs nindent
- Templates inside of a template
- The env block from a project’s values.yaml file
- The range loop
- The range loop with variables
- toYaml
- Whitespaces
- Helm tpl
- The env block from the _helpers.tpl
- if/else — flow control
- The env block from an external file
- Excluding a template from the chart
Deployment — the file’s structure
To plan the future generic template's structure, let's check which blocks will be common and shared among all applications, and which will differ and have to be included into the template per project.
Here we will mostly talk about the deployment.yaml file; the others will be done similarly.
So:
- the deployment.yaml file:
  - metadata:
    - name: common, a value will be taken from the values.yaml of a specific project
    - annotations: common, we have only the reloader.stakater.com/auto here and it will always be set to true (still, check the Kubernetes: ConfigMap and Secrets - data auto-reload in pods post about other available options)
    - labels: common and will be used in different places, for example in the same file below in the spec.template.metadata.labels and in the template with the Kubernetes CronJobs, so it will be a good idea to move labels into the _helpers.tpl file
  - spec:
    - replicas: common, a value will be taken from the values.yaml of a specific project
    - strategy:
      - type: common, a value will be taken from the values.yaml of a specific project
    - selector:
      - matchLabels: also common and also used in different places, let's move it into the _helpers.tpl
    - template:
      - metadata:
        - labels: will be taken from the _helpers.tpl
      - spec:
        - containers:
          - name: common, a value will be taken from the values.yaml of a specific project
          - image: common, a value will be taken from the values.yaml of a specific project
          - env: the most interesting part of this post:
            - we can move it into the values.yaml of each project
              - pros: no need to specify values, as they can be set directly here in the value fields of the variables
              - cons:
                - our variables are almost identical among all the projects, so they will be duplicated
                - the values.yaml will be bloated, as we have a lot of variables to add
                - not all variables are in the KEY:VALUE form, some will take their values from a valueFrom.secretKeyRef.<KUBE_SECRET_NAME>, thus we will be unable to set their values directly in the values.yaml
            - another way is to use the _helpers.tpl file, where we will have a set of common variables, each with its own if/then: if a project's values.yaml does not contain a value for a variable, then such a variable will not be added to the generic template, and thus adding a new variable will not break a project's deploy
            - and for the case when a project has to have its own set of variables, we can add a condition before the env block, checking something like if {{ .Values.chartSettings.customEnvs == true }}, skip adding the block from the _helpers.tpl or values.yaml, and use a dedicated file instead with something like tpl .Files.Get project1/envs.yaml
          - volumeMounts: can also differ, will think about it later
          - ports:
            - containerPort: will always be present, will take its value from the values.yaml (btw, it's not necessary to add them here at all, see the Should I configure the ports in the Kubernetes deployment? post)
          - livenessProbe, readinessProbe: will be common with the httpGet.path and httpGet.port, so can be moved into the _helpers.tpl, but also add an option to include them via an external file with .Files.Get
          - resources:
            - requests.cpu, requests.memory: well... I guess will be added for all, but still, it's a question, need to think about it later; check the Kubernetes: Evicted pods and Pods Quality of Service post for more details
            - limits.cpu, limits.memory: the memory will be limited for all... or not... will think later
        - volumes: can also differ, leave it for later
        - imagePullSecrets: similar for all
- hpa.yaml - for the HPA
- network.yaml - Service, Ingress
- secrets.yaml - Kubernetes Secrets
- rbac.yaml - attaching user groups to a namespace created by Helm during deployment
- cronjobs.yaml - Kubernetes CronJobs
- _helpers.tpl - and our "helper" for the Helm Named Templates
So, basically, the main idea is the following (see the directory sketch below):
- a general chart with templates in the templates directory
- in the templates directory we will keep the deployment.yaml, hpa.yaml, _helpers.tpl, etc
- near the templates we will create an additional directory called projects
- and inside it - directories named after the projects:
  - project1
  - project2
  - project3
- inside each of them there will be an env directory
- with dev, stage, prod directories
- with dedicated values.yaml and secrets.yaml files
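The resulting layout could look roughly like this (a sketch; the chart name and environment directories are just the examples used in this post):
generic-chart/
├── templates
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── cronjobs.yaml
│   └── _helpers.tpl
└── projects
    ├── project1
    │   └── env
    │       ├── dev
    │       │   ├── values.yaml
    │       │   └── secrets.yaml
    │       ├── stage
    │       └── prod
    ├── project2
    └── project3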
Helm Named templates
A good post about named templates is How To Reduce Helm Chart Boilerplate With Named Templates (more links at the end of this post).
The official documentation is the Named Templates page.
The very first block in our template that we'd like to move into the _helpers.tpl is labels.
The general idea behind Named Templates in Helm is to write some code only once and then include it in every place where it is needed. This also helps to reduce a template's size and makes it more readable.
A named templates file name starts with an underscore and has the .tpl extension.
The best-known such file is the _helpers.tpl, which we will use in this case.
Each template definition in this file starts with the define keyword and ends with end.
A name for a named template usually includes the chart name and the block the template describes, but you can use any other naming. For example, in our file we will use the helpers. prefix.
The one inconvenience here is the fact that you can't use values like .Chart.Name in a template's name, although you can create a Helm variable inside it. Not sure about this yet, will see later; for now, will just hardcode the helpers. prefix.
Common labels
Well, which labels will be good to have at all?
- environment - Dev, Stage, Prod - will be set from the values.yaml
- appversion - will be set during a build in Jenkins via the --set option
For more good ideas we can ask Google for "kubernetes recommended labels"; the very first result is the Recommended Labels page.
Also, check the Labels and Annotations page in the Helm documentation.
Plus, it may be a good idea to take a look at some existing charts, for example, the sonarqube/templates/deployment.yaml.
So, for now, we can add four custom labels:
- application: {{ .Values.appConfig.appName }}, will be set in the values.yaml for each application
- version: {{ .Chart.Version }}-{{ .Chart.AppVersion }}, will be set from Jenkins
- environment: {{ .Values.appConfig.appEnv }}, will be set in the values.yaml for each application
- managed-by: {{ .Release.Service }}, set by Helm
And if we want to add another one, we can do it in one place instead of updating every place where labels are used.
define
Go back to our _helpers.tpl and describe our labels:
{{- define "helpers.labels" -}}
application: {{ .Values.appConfig.appName }}
version: {{ .Chart.Version }}-{{ .Chart.AppVersion }}
environment: {{ .Values.appConfig.appEnv }}
managed-by: {{ .Release.Service }}
{{- end }}
include
Then, using include, specify where it must be inserted in the general deployment template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appConfig.appName }}-deployment
  labels: {{- include "helpers.labels" . | nindent 4 }}
...
indent vs nindent
- indent - just adds spaces
- nindent - adds spaces plus a newline character
I.e. instead of writing something like this:
...
labels:
{{- include "helpers.labels" . | indent 4 }}
...
We can set it in one line and avoid typing whitespaces in the template:
...
labels: {{- include "helpers.labels" . | nindent 4 }}
...
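To make the difference concrete, with nindent 4 the rendered metadata block would look roughly like this (the values are illustrative, taken from the dry-run output shown later in this post):
metadata:
  name: newapp-deployment
  labels:
    application: newapp
    version: 0.1.0-1.16.0
    environment: dev
    managed-by: Helm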
Also, this plays a role in adding a YAML block into the template, will see it in action below.
Going back to the named templates: let's do the same for the spec.selector.matchLabels - move it to a dedicated template so we can reuse it later, for example in Services.
In the _helpers.tpl add a new block:
...
{{- define "helpers.selectorLabels" -}}
application: {{ .Values.appConfig.appName }}
{{- end }}
For now, we do a selection by an application name only, and later we can update it and include any other selectors.
Add it to the deployment:
...
  selector:
    matchLabels:
      {{- include "helpers.selectorLabels" . | nindent 6 }}
  template:
...
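For example, the same helpers.selectorLabels could later be reused in a Service manifest, so the Service always selects the same pods as the Deployment. A minimal sketch (a service.yaml is not part of the chart yet, so this is only an assumption of how it could look):
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appConfig.appName }}-service
  labels: {{- include "helpers.labels" . | nindent 4 }}
spec:
  selector: {{- include "helpers.selectorLabels" . | nindent 4 }}
  ports:
    - port: {{ .Values.appConfig.port }}
      targetPort: {{ .Values.appConfig.port }}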
Templates inside of a template
And now we've come to the most interesting part of the task: adding whole blocks into the template's file.
So, in the currently used template, we have the env block:
...
env:
- name: APP_SECRET
  valueFrom:
    secretKeyRef:
      name: app-backend-secrets
      key: backend-app-secret
- name: AUTH_SECRET
  valueFrom:
    secretKeyRef:
      name: app-backend-secrets
      key: backend-auth-secret
- name: CLIENT_HOST
  value: {{ .Values.appConfig.clientHost }}
- name: DATABASE_HOST
  value: {{ .Values.appConfig.db.host }}
- name: DATABASE_SLAVE_HOST
  value: {{ .Values.appConfig.db.slave }}
- name: DB_USERNAME
  value: {{ .Values.appConfig.db.user }}
- name: INSTANA_AGENT_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
...
We'd like to move it to a block in another file and include it in the new template that we are writing now.
The env block from a project's values.yaml file
Let’s see how we can implement this.
At first, we need to add environment variables to the values.yaml.
Create the directory tree:
$ mkdir -p projects/newapp/env/dev
And files inside - values.yaml and secrets.yaml:
$ touch projects/newapp/env/dev/{values.yaml,secrets.yaml}
Then, in the projects/newapp/env/dev/values.yaml create a list called environments with only one variable for now:
environments:
- name: 'DATABASE_HOST'
  value: 'dev.aurora.web.example.com'
To be able to test the template with helm --debug --dry-run, add all the other data, like {{ .Values.deployment.replicaCount }}, as well (see the sketch below).
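A minimal values.yaml sketch for such a dry-run could look roughly like this (the key names mirror the ones referenced by the templates in this post; the values are just examples):
appConfig:
  appName: 'newapp'
  appEnv: 'dev'
  port: 3001

deployment:
  replicaCount: 2
  image:
    repository: 'projectname'
    name: 'projectname'
    tag: 'latest'
  environments:
    - name: 'DATABASE_HOST'
      value: 'dev.aurora.web.example.com'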
The range loop
Documentation — Flow Control.
Good examples — Helm Template range.
So, we have the DATABASE_HOST environment variable with the dev.aurora.web.example.com value, described in the values.yaml as an element of a list with two key:value pairs:
...
<LIST_NAME>:
- <VAR_NAME>: <VAR_VALUE>
  <VAR_NAME>: <VAR_VALUE>
...
Which we can then include in the general template with the following code:
...
    spec:
      containers:
      - name: {{ .Values.appConfig.appName }}-pod
        image: {{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}
        env:
        {{- range .Values.deployment.environments }}
        - name: {{ .name }}
          value: {{ .value }}
        {{- end }}
...
Here:
- iterate over the environments list with its key:value lines
- for each element of the list, take its .name - it will be set in the name: {{ .name }}
- for each element of the list, take its .value - it will be set in the value: {{ .value }}
Check it:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run .
...
spec:
  replicas: 2
  strategy:
    type:
  selector:
    matchLabels:
      application: newapp
  template:
    metadata:
      labels:
        application: newapp
        version: 0.1.0-1.16.0
        environment: dev
        managed-by: Helm
    spec:
      containers:
      - name: newapp-pod
        image: projectname/projectname:latest
        env:
        - name: DATABASE_HOST
          value: dev.aurora.web.example.com
        ports:
        - containerPort: 3001
...
By the way, we can see our labels here.
And here are our environment variables:
...
        env:
        - name: DATABASE_HOST
          value: dev.aurora.web.example.com
...
The range loop with variables
Another solution is to use the same loop, but now with the $key and $value variables. In this case we are not tied to the names of the variables in the values.yaml file and just iterate over each of them:
...
        env:
        {{- range $key, $value := .Values.deployment.environments }}
        - name: {{ $key }}
          value: {{ $value }}
        {{- end }}
...
Update the values.yaml - now our environments are set as a dictionary containing our variables as plain key:value pairs; another one, DB_USERNAME, is added here to make the example more illustrative:
...
environments:
  DATABASE_HOST: 'dev.aurora.web.example.com'
  DB_USERNAME: 'dbuser'
...
Check it:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run .
...
    spec:
      containers:
      - name: newapp-pod
        image: projectname/projectname:latest
        env:
        - name: DATABASE_HOST
          value: dev.aurora.web.example.com
        - name: DB_USERNAME
          value: dbuser
        ports:
        - containerPort: 3001
...
toYaml
But let’s recall our initial template:
...
- name: DATABASE_HOST
  value: {{ .Values.backendConfig.db.host }}
- name: DB_USERNAME
  value: {{ .Values.backendConfig.db.user }}
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: projectname-backend-secrets
      key: backend-db-password
...
We can't use a simple loop here, so the third solution is to describe the whole env block in the values.yaml and then include it in the general template by using toYaml.
Update the values.yaml and set the block here:
environments:
- name: DATABASE_HOST
  value: 'dev.aurora.web.example.com'
- name: DB_USERNAME
  value: 'dbuser'
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: projectname-backend-secrets
      key: backend-db-password
And then include it in the general template's env block:
...
      containers:
      - name: {{ .Values.appConfig.appName }}-pod
        image: {{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}
        env:
        {{- toYaml .Values.deployment.environments | nindent 8 }}
...
Whitespaces
Pay attention to the dash symbol "-" placed after the {{ and before the toYaml:
{{- toYaml .Values.deployment.environments | nindent 8 }}
It is used to remove whitespace before the included block; also keep in mind that newlines are treated as whitespace too.
If we remove the "-" from here, we will get an extra newline in the resulting manifest:
...
        env:

        - name: DATABASE_HOST
          value: dev.aurora.web.example.com
        - name: DB_USERNAME
...
See Controlling Whitespace on the Flow Control page and more examples on the Directives and Whitespace Handling pages.
And here we are again using nindent to add a newline and the indentation for the lines from .Values.deployment.environments.
Helm tpl
And one more solution is to use the tpl function.
In contrast to the previous one, this time we can use directives like {{ .Release.Name }} in our values.yaml, as its content will be treated as a part of the template itself.
Update the values.yaml:
environments: |-
  - name: RELEASE_NAME
    value: {{ .Release.Name }}
  - name: DATABASE_HOST
    value: 'dev.aurora.web.example.com'
  - name: DB_USERNAME
    value: 'dbuser'
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: projectname-backend-secrets
        key: backend-db-password
Pay attention to the "|-": we are describing the block with variables as a plain multi-line string, with trailing newlines stripped.
Add it to the deployment’s template:
...
      containers:
      - name: {{ .Values.appConfig.appName }}-pod
        image: {{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}
        env:
        {{- tpl .Values.deployment.environments . | nindent 8 }}
...
Here we are passing the content of .Values.deployment.environments to the tpl function to include it in the template.
Check it:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run .
...
    spec:
      containers:
      - name: newapp-pod
        image: projectname/projectname:latest
        env:
        - name: RELEASE_NAME
          value: eks-dev-1-newapp-backend
        - name: DATABASE_HOST
          value: 'dev.aurora.web.example.com'
        - name: DB_USERNAME
          value: 'dbuser'
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: projectname-backend-secrets
              key: backend-db-password
        ports:
        - containerPort: 3001
...
The env block from the _helpers.tpl
We already used this file, but this time let's add a condition to check whether some parameter is set in the values.yaml.
if/else - flow control
In the _helpers.tpl define the env block's template:
{{- define "helpers.environments" -}}
- name: RELEASE_NAME
  value: {{ .Release.Name }}
{{- if .Values.appConfig.db.host }}
- name: DATABASE_HOST
  value: 'dev.aurora.web.example.com'
{{- end }}
{{- if .Values.appConfig.db.user }}
- name: DB_USERNAME
  value: 'dbuser'
{{- end }}
{{- if .Values.appConfig.db.password }}
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: projectname-backend-secrets
      key: backend-db-password
{{- end }}
{{- end }}
Here we are using an if/else condition for each environment variable: if a value for the variable is specified and found, then the variable will be added to the generic template.
Using this, we will be able to expand the environment variables list in helpers.environments without worrying that some project's values.yaml has no value for a variable and that this will break its deployment.
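For example, if a project's values.yaml contains only the db.host and db.user keys (a sketch of the relevant part; db.password is expected to come from the encrypted secrets.yaml instead):
appConfig:
  appName: 'newapp'
  appEnv: 'dev'
  db:
    host: 'dev.aurora.web.example.com'
    user: 'dbuser'
then the DB_PASSWORD entry from helpers.environments will simply be skipped during rendering.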
Now, include it in the template:
...
      containers:
      - name: {{ .Values.appConfig.appName }}-pod
        image: {{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}
        env: {{- include "helpers.environments" . | nindent 8 }}
...
Check it:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run .
...
    spec:
      containers:
      - name: newapp-pod
        image: projectname/projectname:latest
        env:
        - name: RELEASE_NAME
          value: eks-dev-1-newapp-backend
        - name: DATABASE_HOST
          value: 'dev.aurora.web.example.com'
        - name: DB_USERNAME
          value: 'dbuser'
        ports:
        - containerPort: 3001
...
And pay attention here: the DB_PASSWORD was not set. We have the {{ .Release.Name }} by default, and we have the .Values.appConfig.db.host and .Values.appConfig.db.user in the values.yaml, but the .Values.appConfig.db.password is set with Helm Secrets and is stored in the secrets.yaml file, which we are not using at all right now.
So, the if/else condition did its job here, and Helm didn't create the DB_PASSWORD variable.
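If the password should be rendered as well, the values from the encrypted file would have to be passed explicitly, for example via the Helm Secrets plugin (a sketch, assuming the plugin is installed and the secrets.yaml is encrypted with it):
$ helm secrets upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml -f projects/newapp/env/dev/secrets.yaml --debug --dry-run .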
The env block from an external file
And the last solution I was able to find is to include the env block from a file in a project's directory.
Let's add a new option to the values.yaml to disable the include from the _helpers.tpl, call it customEnvs, and add another one, customEnvsFile, to specify a path to a file with environment variables:
...
deployment:
  customEnvs: true
  customEnvsFile: 'projects/newapp/templates/environments.yaml'
...
Now, in the deployment template add a condition to check customEnvs: if it is set to false, then via {{ else }} we will include our previous template from the _helpers.tpl:
...
        env:
        {{- if .Values.deployment.customEnvs }}
        {{- .Files.Get .Values.deployment.customEnvsFile | nindent 8 }}
        {{- else }}
        {{- include "helpers.environments" . | nindent 8 }}
        {{ end -}}
        ports:
...
And this will work, but only because I've removed the .Release.Name from the projects/newapp/templates/environments.yaml:
- name: DATABASE_HOST
  value: 'dev.aurora.web.example.com'
- name: DB_USERNAME
  value: 'dbuser'
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: projectname-backend-secrets
      key: backend-db-password
This can be solved with the already familiar tpl function.
Go back to the projects/newapp/templates/environments.yaml and set the .Release.Name back:
- name: RELEASE_NAME
  value: {{ .Release.Name }}
- name: DATABASE_HOST
  value: 'dev.aurora.web.example.com'
...
In the deployment template, add the tpl, and wrap .Files.Get and its argument in parentheses:
...
        {{- if .Values.deployment.customEnvs }}
        {{- tpl ( .Files.Get .Values.deployment.customEnvsFile ) . | nindent 8 }}
        {{- else }}
        {{- include "helpers.environments" . | nindent 8 }}
        {{ end -}}
...
Check it:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run .
...
    spec:
      containers:
      - name: newapp-pod
        image: projectname/projectname:latest
        env:
        - name: RELEASE_NAME
          value: eks-dev-1-newapp-backend
        - name: DATABASE_HOST
          value: 'dev.aurora.web.example.com'
        - name: DB_USERNAME
          value: 'dbuser'
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: projectname-backend-secrets
              key: backend-db-password
        ports:
        - containerPort: 3001
...
We can use the same approach for volumes and volumeMounts (see the sketch below).
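A minimal sketch of what that could look like for volumeMounts (the customVolumeMounts / customVolumeMountsFile keys and the helpers.volumeMounts template are hypothetical names, not defined in this chart yet):
...
        volumeMounts:
        {{- if .Values.deployment.customVolumeMounts }}
        {{- tpl ( .Files.Get .Values.deployment.customVolumeMountsFile ) . | nindent 8 }}
        {{- else }}
        {{- include "helpers.volumeMounts" . | nindent 8 }}
        {{ end -}}
...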
In general, the solution with the _helpers.tpl looks usable; we will test it on the new project and see how it goes.
The whole Deployment template now:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appConfig.appName }}-deployment
  labels: {{- include "helpers.labels" . | nindent 4 }}
  annotations:
    {{- if .Values.deployment.deploymentAnnotations }}
    {{- toYaml .Values.deployment.deploymentAnnotations | nindent 6 }}
    {{- end }}
    reloader.stakater.com/auto: "true"
spec:
  replicas: {{ .Values.deployment.replicaCount }}
  strategy:
    type: {{ .Values.deployment.delpoyStrategy }}
  selector:
    matchLabels:
      {{- include "helpers.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels: {{- include "helpers.labels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Values.appConfig.appName }}-pod
        image: {{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}
        env:
        {{- if .Values.deployment.customEnvs }}
        {{- tpl ( .Files.Get .Values.deployment.customEnvsFile ) . | nindent 8 }}
        {{- else }}
        {{- include "helpers.environments" . | nindent 8 }}
        {{ end -}}
        ports:
        - containerPort: {{ .Values.appConfig.port }}
        {{- with .Values.deployment.livenessProbe }}
        livenessProbe:
          httpGet:
            path: {{ .path }}
            port: {{ .port }}
          initialDelaySeconds: {{ .initDelay }}
        {{- end }}
        {{- with .Values.deployment.readinessProbe }}
        readinessProbe:
          httpGet:
            path: {{ .path }}
            port: {{ .port }}
          initialDelaySeconds: {{ .initDelay }}
        {{- end }}
        resources:
          requests:
            cpu: {{ .Values.deployment.resources.requests.cpu | quote }}
      imagePullSecrets:
      - name: bttrm-docker-secret
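For reference, the _helpers.tpl assembled from the snippets above would look roughly like this:
{{- define "helpers.labels" -}}
application: {{ .Values.appConfig.appName }}
version: {{ .Chart.Version }}-{{ .Chart.AppVersion }}
environment: {{ .Values.appConfig.appEnv }}
managed-by: {{ .Release.Service }}
{{- end }}

{{- define "helpers.selectorLabels" -}}
application: {{ .Values.appConfig.appName }}
{{- end }}

{{- define "helpers.environments" -}}
- name: RELEASE_NAME
  value: {{ .Release.Name }}
{{- if .Values.appConfig.db.host }}
- name: DATABASE_HOST
  value: 'dev.aurora.web.example.com'
{{- end }}
{{- if .Values.appConfig.db.user }}
- name: DB_USERNAME
  value: 'dbuser'
{{- end }}
{{- if .Values.appConfig.db.password }}
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: projectname-backend-secrets
      key: backend-db-password
{{- end }}
{{- end }}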
Excluding a template from the chart
And a final thought: how can we exclude a template file from the chart altogether?
For example, let's check the cronjobs.yaml file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Values.appConfig.appName }}-cron
  labels: {{- include "helpers.labels" . | nindent 4 }}
spec:
  schedule: {{ .Values.cronjobs.schedule | quote }}
  startingDeadlineSeconds: {{ .Values.cronjobs.startingDeadline }}
  concurrencyPolicy: {{ .Values.cronjobs.concurrencyPolicy }}
  jobTemplate:
    spec:
      template:
        metadata:
          labels: {{- include "helpers.labels" . | nindent 12 }}
        spec:
          containers:
          - name: {{ .Values.appConfig.appName }}-cron
            image: {{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}
            env:
            {{- if .Values.deployment.customEnvs }}
            {{- tpl ( .Files.Get .Values.deployment.customEnvsFile ) . | nindent 14 }}
            {{- else }}
            {{- include "helpers.environments" . | nindent 14 }}
            {{ end -}}
            command: ["npm"]
            args: ["run", "cron:app"]
          restartPolicy: Never
          imagePullSecrets:
          - name: bttrm-docker-secret
By the way, here are our labels and the env block reused from the _helpers.tpl.
But not every project has cronjobs.
So, we can add a new parameter to the values.yaml - cronjobs.enabled:
...
################
### Cronjobs ###
################
cronjobs:
  enabled: false
  schedule: '*/15 * * * *'
  startingDeadline: 10
  concurrencyPolicy: 'Forbid'
...
And then wrap the whole template's content in one if/else condition check:
{{- if .Values.cronjobs.enabled }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Values.appConfig.appName }}-cron
  labels: {{- include "helpers.labels" . | nindent 4 }}
...
          imagePullSecrets:
          - name: bttrm-docker-secret
{{- end }}
Check it:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run .
...
---
# Source: project-backend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: newapp-deployment
...
      imagePullSecrets:
      - name: bttrm-docker-secret
No cronjobs were added.
Add --set cronjobs.enabled=true - and they will be added to the release:
$ helm upgrade --install eks-dev-1-newapp-backend --namespace eks-dev-1-newapp-backend-ns --create-namespace -f projects/newapp/env/dev/values.yaml --debug --dry-run . --set cronjobs.enabled=true
...
      imagePullSecrets:
      - name: bttrm-docker-secret
---
# Source: project-backend/templates/cronjobs.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: newapp-cron
...
          imagePullSecrets:
          - name: bttrm-docker-secret
“That’s all folks!” ©
Useful links
- The Art of the Helm Chart: Patterns from the Official Kubernetes Charts
- Writing Reusable Helm Charts
- How To Reduce Helm Chart Boilerplate With Named Templates
- Helm from basics to advanced
- Creating packages for Kubernetes with Helm: chart structure and templating (Rus)
- Helm Templates Cheat Sheet
- Introduction to Helm — Templates
- Helm Templates
- Helm Tricks: Input Validation With ‘Required’ And ‘Fail’
- Helm Chart and Template Basics — Part 1
Originally published at RTFM: Linux, DevOps и системное администрирование.