Thanks to k3d, you can run a Kubernetes cluster on your laptop using just Docker.
In this tutorial we use a simple Go application that listens for incoming HTTP requests on port 8001 and returns the content of the file static/hello.html.
This file uses three Go template variables:
- {{.Start}}
- {{.Username}}
- {{.End}}
The actual value of each template variable can be set with the corresponding environment variable:
- MYAPP_START, default value mystart
- MYAPP_USERNAME, default value myuser
- MYAPP_END, default value myend
You can combine those variables as you like to generate the message you want, simply by using them in hello.html, whose default content is:
default --> {{.Start}} {{.Username}} {{.End}}
So, by default, this is the message generated by the application:
default --> mystart myuser myend
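For reference, the whole application fits in a few lines of Go. The following is a minimal sketch reconstructed from the behavior described above; the actual source in the goapp directory may differ in its details:

package main

import (
	"html/template"
	"log"
	"net/http"
	"os"
)

// envOr returns the value of the environment variable key,
// or fallback if the variable is not set.
func envOr(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

func main() {
	data := struct{ Start, Username, End string }{
		envOr("MYAPP_START", "mystart"),
		envOr("MYAPP_USERNAME", "myuser"),
		envOr("MYAPP_END", "myend"),
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Parse the template on every request, so a replaced
		// static/hello.html (e.g. mounted from a volume or a
		// ConfigMap) is picked up without restarting the server.
		tmpl, err := template.ParseFiles("static/hello.html")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		tmpl.Execute(w, data)
	})
	log.Fatal(http.ListenAndServe(":8001", nil))
}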
First of all, build the Docker image: open a prompt in the goapp directory and run the following command:
$> docker build -t gjuljo/myapp .
Of course you are free to change the image name, but remember to update it in all the subsequent commands.
Go back to the root directory of the project and test the docker image:
$> docker run -it --rm -p 8001:8001 gjuljo/myapp
$> curl -w "\n" localhost:8001
default --> mystart myuser myend
Stop the running container and run it again, providing values for the environment variables:
$> docker run -it --rm -p 8001:8001 -e MYAPP_START=Hello -e MYAPP_USERNAME=Giulio -e MYAPP_END="how are you?" gjuljo/myapp
$> curl -w "\n" localhost:8001
default --> Hello Giulio how are you?
Stop the running container and, in addition to the custom environment variables, override the default hello.html file with an external volume:
On Windows:
$> docker run -it --rm -p 8001:8001 -v %CD%/hello1/:/app/static/ -e MYAPP_START=Hello -e MYAPP_USERNAME=Giulio -e MYAPP_END="how are you?" gjuljo/myapp
On Linux:
$> docker run -it --rm -p 8001:8001 -v $PWD/hello1/:/app/static/ -e MYAPP_START=Hello -e MYAPP_USERNAME=Giulio -e MYAPP_END="how are you?" gjuljo/myapp
$> curl -w "\n" localhost:8001
hello1 --> Hello Giulio how are you?
Refer to the k3d documentation to install k3d. To let the cluster pull the Docker images you build locally, we create a local registry (i.e. the container registry.local) that you access from your host using a local hostname (registry.lvh.me).
- Create a volume to host the registry:
$> docker volume create local_registry
- Create a container running the registry image:
$> docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
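You can verify that the registry is up by hitting the base endpoint of its HTTP API, which returns an empty JSON object (any hostname under lvh.me resolves to 127.0.0.1, so registry.lvh.me already works from the host):
$> curl http://registry.lvh.me:5000/v2/
{}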
- Tag your image and publish it to the local registry. Repeat these commands every time you change the image contents:
$> docker tag gjuljo/myapp:latest registry.lvh.me:5000/gjuljo/myapp:latest
$> docker push registry.lvh.me:5000/gjuljo/myapp:latest
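To double-check that the push succeeded, you can query the catalog endpoint of the standard registry HTTP API; it should list the repository:
$> curl http://registry.lvh.me:5000/v2/_catalog
{"repositories":["gjuljo/myapp"]}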
- Create the registries.yaml file in your filesystem (ATTENTION: if you are running WSL1, create this file in the Windows filesystem, e.g. C:\Work\k3d\registry):
mirrors:
  "registry.lvh.me:5000":
    endpoint:
      - http://registry.local:5000
Please notice that this file maps the registry name registry.lvh.me:5000 to registry.local:5000, the hostname of the registry container we started earlier.
- Create the k3d cluster, mounting the local registry directory, where registries.yaml is supposed to be:
$> k3d create --publish 80:80 --volume $PWD/registry:/etc/rancher/k3s
- Export the Kubernetes configuration file and wait for the cluster to be up and running:
$> export KUBECONFIG=$(k3d get-kubeconfig)
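You can make sure the node is ready before moving on:
$> kubectl get nodes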
- Connect the local registry (i.e. the container registry.local) to the Docker network created by k3d:
$> docker network connect k3d-k3s-default registry.local
You need to do this only once: the connection is reused every time you delete and recreate a k3d cluster in the same environment, unless you remove the registry container or the network.
In this section we practice with plain Kubernetes YAML files to install the same application with incremental levels of configurability:
- Using default settings
- Using a ConfigMap to set the environment variables
- Using a ConfigMap to replace the default HTML file
- Using a Secret to hold a confidential value
In the first Kubernetes example, we just create a Deployment, a Service and an Ingress object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test1-deployment
  labels:
    app: test1-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test1-app
  template:
    metadata:
      labels:
        app: test1-app
    spec:
      containers:
        - name: test1-app
          image: registry.lvh.me:5000/gjuljo/myapp:latest
          ports:
            - containerPort: 8001
          env:
            - name: MYAPP_USERNAME
              value: Giulio
---
apiVersion: v1
kind: Service
metadata:
  name: test1-service
  labels:
    app: test1-service
spec:
  ports:
    - port: 8000
      targetPort: 8001
      name: http
  selector:
    app: test1-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test1-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: test1.lvh.me
      http:
        paths:
          - backend:
              serviceName: test1-service
              servicePort: 8000
Notice the following:
- the image name refers to the local registry;
- the environment variable MYAPP_USERNAME is set directly in the Deployment object, without any additional indirection mechanism (i.e. a ConfigMap);
- the Ingress object exposes the hostname test1.lvh.me (automatically resolved to 127.0.0.1) and makes the local ingress listen on the Kubernetes port (i.e. 80).
$> kubectl create -f test1-default.yaml
This is what you get when you invoke the service:
$> curl -w "\n" test1.lvh.me
default --> mystart Giulio myend
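If curl does not return the expected message, first check that the pod is running and ready:
$> kubectl get pods -l app=test1-app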
You get the same result even if you use the IP address (127.0.0.1) and set the Host header to the expected hostname (i.e. test1.lvh.me):
$> curl -w "\n" -H 'Host:test1.lvh.me' 127.0.0.1
default --> mystart Giulio myend
Now we also add a ConfigMap to set the values of the other environment variables, MYAPP_START and MYAPP_END, which is referenced by the Deployment object:
kind: ConfigMap
apiVersion: v1
metadata:
  name: test2-config
data:
  MYAPP_START_KEY: "Hello"
  MYAPP_END_KEY: "how are you?"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test2-deployment
  labels:
    app: test2-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2-app
  template:
    metadata:
      labels:
        app: test2-app
    spec:
      containers:
        - name: test2-app
          image: registry.lvh.me:5000/gjuljo/myapp:latest
          ports:
            - containerPort: 8001
          env:
            - name: MYAPP_USERNAME
              value: Giulio
            - name: MYAPP_START
              valueFrom:
                configMapKeyRef:
                  name: test2-config
                  key: MYAPP_START_KEY
            - name: MYAPP_END
              valueFrom:
                configMapKeyRef:
                  name: test2-config
                  key: MYAPP_END_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: test2-service
  labels:
    app: test2-service
spec:
  ports:
    - port: 8000
      targetPort: 8001
      name: http
  selector:
    app: test2-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test2-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: test2.lvh.me
      http:
        paths:
          - backend:
              serviceName: test2-service
              servicePort: 8000
In this second deployment, the environment variables MYAPP_START and MYAPP_END are set to Hello and how are you?, respectively:
$> kubectl create -f test2-env.yaml
In this second test the ingress hostname is test2.lvh.me:
$> curl -w "\n" test2.lvh.me
default --> Hello Giulio how are you?
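To double-check the values actually stored in the cluster, you can inspect the ConfigMap:
$> kubectl get configmap test2-config -o yaml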
You can even provide the contents of the hello.html file, by using the same or an additional ConfigMap that includes the contents of the file itself and can be mounted as a Volume:
kind: ConfigMap
apiVersion: v1
metadata:
  name: test3-config-vol
data:
  hello.html: |
    hello --> {{.Start}} {{.Username}} {{.End}}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: test3-config-env
data:
  MYAPP_START_KEY: "Hello"
  MYAPP_END_KEY: "how are you?"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test3-deployment
  labels:
    app: test3-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test3-app
  template:
    metadata:
      labels:
        app: test3-app
    spec:
      containers:
        - name: test3-app
          image: registry.lvh.me:5000/gjuljo/myapp:latest
          ports:
            - containerPort: 8001
          env:
            - name: MYAPP_USERNAME
              value: Giulio
            - name: MYAPP_START
              valueFrom:
                configMapKeyRef:
                  name: test3-config-env
                  key: MYAPP_START_KEY
            - name: MYAPP_END
              valueFrom:
                configMapKeyRef:
                  name: test3-config-env
                  key: MYAPP_END_KEY
          volumeMounts:
            - name: test3-vol
              mountPath: /app/static
      volumes:
        - name: test3-vol
          configMap:
            name: test3-config-vol
---
apiVersion: v1
kind: Service
metadata:
  name: test3-service
  labels:
    app: test3-service
spec:
  ports:
    - port: 8000
      targetPort: 8001
      name: http
  selector:
    app: test3-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test3-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: test3.lvh.me
      http:
        paths:
          - backend:
              serviceName: test3-service
              servicePort: 8000
Create the service as follows:
$> kubectl create -f test3-vol.yaml
This time, the usual invocation generates different content:
$> curl -w "\n" test3.lvh.me
hello --> Hello Giulio how are you?
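Note that files projected from a ConfigMap volume are refreshed by the kubelet after a short delay when the ConfigMap changes (subPath mounts excluded). Assuming the application re-reads hello.html on every request, you can edit the ConfigMap in place and see the new content without redeploying:
$> kubectl edit configmap test3-config-vol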
One or more environment variables can be handled as confidential data, stored in a Secret object and then consumed as environment variables by the pod:
kind: ConfigMap
apiVersion: v1
metadata:
  name: test4-config-vol
data:
  hello.html: |
    hello --> {{.Start}} {{.Username}} {{.End}}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: test4-config-env
data:
  MYAPP_START_KEY: "Hello"
  MYAPP_END_KEY: "how are you?"
---
apiVersion: v1
kind: Secret
metadata:
  name: test4-secret
type: Opaque
stringData:
  MYAPP_END_KEY: "this is secret"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test4-deployment
  labels:
    app: test4-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test4-app
  template:
    metadata:
      labels:
        app: test4-app
    spec:
      containers:
        - name: test4-app
          image: registry.lvh.me:5000/gjuljo/myapp:latest
          ports:
            - containerPort: 8001
          env:
            - name: MYAPP_USERNAME
              value: Giulio
            - name: MYAPP_START
              valueFrom:
                configMapKeyRef:
                  name: test4-config-env
                  key: MYAPP_START_KEY
            - name: MYAPP_END
              valueFrom:
                secretKeyRef:
                  name: test4-secret
                  key: MYAPP_END_KEY
          volumeMounts:
            - name: test4-vol
              mountPath: /app/static
      volumes:
        - name: test4-vol
          configMap:
            name: test4-config-vol
---
apiVersion: v1
kind: Service
metadata:
  name: test4-service
  labels:
    app: test4-service
spec:
  ports:
    - port: 8000
      targetPort: 8001
      name: http
  selector:
    app: test4-app
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test4-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: test4.lvh.me
      http:
        paths:
          - backend:
              serviceName: test4-service
              servicePort: 8000
Create the service as follows:
$> kubectl create -f test4-secret.yaml
Once again, the usual invocation returns different content:
$> curl -w "\n" test4.lvh.me
hello --> Hello Giulio this is secret
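Keep in mind that a Secret is only base64-encoded, not encrypted: anyone allowed to read the Secret object can recover the value, for example:
$> kubectl get secret test4-secret -o jsonpath='{.data.MYAPP_END_KEY}' | base64 --decode
this is secret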
Helm helps you define, install and upgrade Kubernetes applications. Helm 3 has been released recently, but in this tutorial we stick to Helm 2, which you can install from the available releases.
Helm 2 requires you to define a ServiceAccount and a ClusterRoleBinding object in order to make Tiller work in your cluster:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Just run the following command:
$> kubectl create -f helm/helm-rbac.yaml
This is actually equivalent to the following commands:
$> kubectl -n kube-system create serviceaccount tiller
$> kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
Once you have the helm command in your path and RBAC is defined, you can initialize Tiller:
$> helm init --service-account tiller
Then wait for Tiller to be up and running and check that all the settings are correct.
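For example, you can wait for the Tiller deployment to complete and then query both the client and server versions (with Helm 2, helm version contacts Tiller, so it doubles as a connectivity check):
$> kubectl -n kube-system rollout status deployment/tiller-deploy
$> helm version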
Create a new chart using the following command:
$> helm create testchart
You can change the values.yaml file of the Helm chart as follows:
- The image block should refer to your Docker image:
image:
  repository: registry.lvh.me:5000/gjuljo/myapp
  tag: latest
  pullPolicy: IfNotPresent
- The ingress block should be enabled and provide a hostname (i.e. test4.lvh.me) with a path (i.e. /):
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: test4.lvh.me
      paths: [/]
- Add the hellofile block. The ConfigMap template shown below pipes this value through b64dec, so it must contain the base64 encoding of the hello.html contents you want to serve (encoding the file keeps newlines and quotes intact, and lets you replace it from the command line with --set, as shown later):
hellofile: aGVsbG8xIC0tPiB7ey5TdGFydH19IHt7LlVzZXJuYW1lfX0ge3suRW5kfX0K
In this example, the value is the base64 encoding of hello1 --> {{.Start}} {{.Username}} {{.End}}; you can generate it with, for example, cat hello.html | base64.
- Add the myEnv block, specifying the values you want for the MYAPP_USERNAME, MYAPP_START and MYAPP_END environment variables:
myEnv:
  name: "Giulio"
  start: "Ciao"
  end: "how are you?"
In order to create the ConfigMap objects needed by the application, add the following configmap.yaml file in the testchart/templates directory:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-config-vol
data:
  hello.html: |-
{{ .Values.hellofile | b64dec | indent 4 }}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: {{ .Chart.Name }}-config-env
data:
  MYAPP_USERNAME_KEY: {{ .Values.myEnv.name }}
  MYAPP_START_KEY: {{ .Values.myEnv.start }}
  MYAPP_END_KEY: {{ .Values.myEnv.end }}
Modify testchart/templates/deployment.yaml by adding the volumes block:
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Chart.Name }}-config-vol
Also change the containers block as follows:
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8001
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          env:
            - name: MYAPP_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: {{ .Chart.Name }}-config-env
                  key: MYAPP_USERNAME_KEY
            - name: MYAPP_START
              valueFrom:
                configMapKeyRef:
                  name: {{ .Chart.Name }}-config-env
                  key: MYAPP_START_KEY
            - name: MYAPP_END
              valueFrom:
                configMapKeyRef:
                  name: {{ .Chart.Name }}-config-env
                  key: MYAPP_END_KEY
          volumeMounts:
            - name: config-volume
              mountPath: /app/static
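Before installing the chart, it may help to render the templates locally and check that the ConfigMap and Deployment come out as expected (both commands are available in Helm 2):
$> helm lint testchart
$> helm template testchart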
To install the Helm chart, run the following:
$> helm install --name mytestchart testchart
This is what you get when you invoke the service:
$> curl -w "\n" test4.lvh.me
hello1 --> Ciao Giulio how are you?
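You can list the installed releases and their status at any time:
$> helm ls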
At install time, you can specify a different values file rather than using the one included in the chart itself:
$> helm install --name mytestchart testchart -f values/values3.yaml
This is what you get when you invoke the service:
$> curl -w "\n" test4.lvh.me
hello3 --> Hello Giulio3 how are you?
This command, instead, uses the default values.yaml in the Helm chart and replaces the value of the hellofile setting from the command line with the base64 encoding of the hello4.html file, computed on the fly:
$> helm install --name mytestchart testchart --set hellofile=$(cat values/hello4.html | base64)
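Note that GNU base64 wraps its output at 76 characters by default. That is harmless here, because the encoded hello4.html fits on one line, but for bigger files pass -w0 so that the encoded value stays on a single line and does not break the --set argument:
$> helm install --name mytestchart testchart --set hellofile=$(base64 -w0 values/hello4.html)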
This is what you get when you invoke the service:
$> curl -w "\n" test4.lvh.me
hello4 --> Hello Giulio how are you?
You can even have multiple deployments of the same chart in the same namespace or in different namespaces. Just pay attention to the release name and the ingress hostname to avoid collisions:
$> helm install --name mytestchart-prod testchart -f values/values3.yaml --set "ingress.hosts[0].host=test4-prod.lvh.me" --namespace prod
$> curl -w "\n" test4-prod.lvh.me
hello3 --> Hello Giulio3 how are you?
You can upgrade the Helm chart and restart the pods automatically in two different ways: either with an explicit command-line option to the helm command, or with an annotation in the Deployment object computed as the hash of the ConfigMap.
When you upgrade the Helm chart because you changed a value in values.yaml or a file mapped into a ConfigMap, such as hello.html, you can run the following command with the --recreate-pods option, which restarts the pods:
$> helm upgrade -f values/values2.yaml mytestchart testchart --recreate-pods
ATTENTION: this option is deprecated and has been removed in Helm 3. It restarts all the pods at once, without any rollout policy.
A different solution is to put the checksum of the ConfigMap in the annotations of the Deployment: the annotation is part of the pod template, so whenever an upgrade changes the ConfigMap the hash changes too, and the pods are automatically restarted only when something actually changed. Here follows the snippet of the annotations block you have to add to the Deployment:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "testchart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
This is the regular upgrade command:
$> helm upgrade -f values/values2.yaml mytestchart testchart
To delete the Helm chart, run the following:
$> helm delete mytestchart --purge
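With Helm 2, --purge also removes the release history, so the release name can be reused; you can double-check that nothing is left with:
$> helm ls --all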