For any issues please contact stenlytu@gmail.com or create a PR. Tested with K8s version 1.17.4 and kubectl version 1.18.0.
The goal of this tutorial is to give you a good understanding of Kubernetes.
To achieve this we are going to need a running K8s cluster.
During the tutorial every user creates a personal namespace and executes all exercises there.
- Docker is a must. You can start with the book Docker in Action.
- Check the free K8s courses in EDX: https://www.edx.org/course/introduction-to-kubernetes
- The book Kubernetes in Action gives a good overview.
- Also check for available K8s courses in pluralsight: https://app.pluralsight.com/paths/skill/kubernetes-administration
- And of course https://kubernetes.io/docs/home/
Download the kubeconfig file from your cluster and configure kubectl to use it.
export KUBECONFIG=/path/to/the/kubeconfig.yaml
-
Create a namespace with your I-USER as its name. All following commands will run in this namespace unless specified otherwise.
show
kubectl create ns i353953
Take-away: always try to use shortnames. To find the shortname of a resource, run: kubectl api-resources | grep namespaces
-
Create 2 pods with names nginx1 and nginx2 in your namespace. Both of them should have the label app=v1.
show
kubectl run -n i353953 nginx1 --image=nginx --restart=Never --labels=app=v1
kubectl run -n i353953 nginx2 --image=nginx --restart=Never --labels=app=v1
Take-away: Try to learn the most important kubectl run options, which can save you a lot of time and manual work on YAML files.
-
Change the labels of pod 'nginx2' to be app=v2.
show
kubectl -n i353953 label po nginx2 app=v2 --overwrite
Take-away: use --overwrite when changing labels.
-
Get only pods with label 'app=v2' from all namespaces.
show
kubectl get pods --all-namespaces=true -l app=v2
Take-away: -l can be used to filter resources by labels.
-
Remove the nginx pods to clean your namespace.
show
kubectl -n i353953 delete pod nginx{1,2}
-
Create a messaging pod using the redis:alpine image with the label tier=msg. Check the pod's labels.
show
kubectl run -n i353953 messaging --image redis:alpine -l tier=msg
kubectl -n i353953 describe pod messaging | head
Name:         messaging
Namespace:    i353953
Priority:     0
Node:         ip-10-250-13-141.eu-central-1.compute.internal/10.250.13.141
Start Time:   Sun, 19 Apr 2020 16:25:19 +0300
Labels:       tier=msg
Annotations:  cni.projectcalico.org/podIP: 100.96.1.4/32
              cni.projectcalico.org/podIPs: 100.96.1.4/32
              kubernetes.io/psp: extensions.gardener.cloud.provider-aws.csi-driver-node
Status:       Running
Take-away: Use -l alongside kubectl run to create pods with a specific label.
-
Create a service called messaging-service to expose the messaging application within the cluster on port 6379 and describe it.
show
kubectl -n i353953 expose pod messaging --name messaging-service --port 6379
$ kubectl -n i353953 describe svc messaging-service
Name:              messaging-service
Namespace:         i353953
Labels:            tier=msg
Annotations:       <none>
Selector:          tier=msg
Type:              ClusterIP
IP:                100.67.250.244
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         100.96.0.20:6379
Session Affinity:  None
Events:            <none>
Take-away: kubectl expose is an easy way to create a service automatically when applicable.
-
Create a busybox-echo pod that echoes 'hello world' and exits. After that check the logs.
show
kubectl -n i353953 run busybox-echo --image=busybox --restart=Never --command -- echo "Hello world"
kubectl -n i353953 logs busybox-echo
Take-away: with --command, everything after -- is executed as the container's command.
-
Create an nginx-test pod and set an env value as 'var1=val1'. Check the env value existence within the pod.
show
kubectl -n i353953 run nginx-test --image=nginx --env=var1=val1
kubectl -n i353953 exec -it nginx-test -- env # should see var1=val1 in the output
-
Create a deployment named hr-app using the image nginx:1.7.8 with 2 replicas.
show
kubectl -n i353953 create deployment hr-app --image=nginx:1.7.8 --dry-run=client -o yaml > deploy.yaml
vi deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: hr-app
  name: hr-app
  namespace: i353953
spec:
  replicas: 2 # Change to 2
  selector:
    matchLabels:
      app: hr-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hr-app
    spec:
      containers:
      - image: nginx:1.7.8
        name: nginx
        resources: {}
status: {}
kubectl apply -f deploy.yaml
Take-away: --dry-run=client is used to check whether the resource can be created without actually creating it. Adding -o yaml > filename.yaml redirects the raw output to a file.
-
Scale hr-app deployment to 3 replicas.
show
kubectl -n i353953 scale deploy/hr-app --replicas 3
Take-away: resource_type/resource_name syntax can also be used.
-
Update the hr-app image to nginx:1.7.9.
show
kubectl -n i353953 set image deploy hr-app nginx=nginx:1.7.9
Take-away: You can also edit the deployment manually with kubectl -n i353953 edit deploy/hr-app
-
Check the rollout history of hr-app and confirm that the replicas are OK.
show
kubectl -n i353953 rollout history deploy hr-app
kubectl -n i353953 get deploy hr-app
kubectl -n i353953 get rs # check that a new replica set has been created
kubectl -n i353953 get po -l app=hr-app
-
Undo the latest rollout and verify that new pods have the old image (nginx:1.7.8)
show
kubectl -n i353953 rollout undo deploy hr-app
kubectl -n i353953 get po # select one of the 'Running' pods
kubectl -n i353953 describe po hr-app-695f79495-6gfsw | grep -i Image: # should be nginx:1.7.8
-
Do an update of the deployment with a wrong image nginx:1.91 and check the status.
show
kubectl -n i353953 set image deploy/hr-app nginx=nginx:1.91
kubectl -n i353953 rollout status deploy hr-app
kubectl -n i353953 get po # you'll see 'ErrImagePull'
-
Return the deployment to working state and verify the image is nginx:1.7.9.
show
kubectl -n i353953 rollout undo deploy hr-app
kubectl -n i353953 describe deploy hr-app | grep Image:
kubectl -n i353953 get pods -l app=hr-app
-
Schedule an nginx pod on a specific node using nodeName.
show
Assigning Pods to Nodes documentation
Generate yaml file:
kubectl -n i353953 run nginx-nodename --image nginx --dry-run=client -o yaml > nodename.yaml
Choose one of the nodes (kubectl get nodes) and edit the file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-nodename
  name: nginx-nodename
spec:
  nodeName: <node_name> # add
  containers:
  - image: nginx
    name: nginx-nodename
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Create the pod and check where the pod was scheduled.
Hint: Use '-o wide' to check on which node the pod landed.
-
Schedule an nginx pod on a node based on its label using nodeSelector.
show
Assigning Pods to Nodes documentation
Pick one of the nodes and check for hostname label.
kubectl describe node <node-name> | grep hostname
Generate a yaml file and add the nodeSelector field with the above label, as described in the documentation.
Check that the pod has landed on the correct node.
Take-away: use nodeSelector when you want to schedule pods only on nodes with specific labels.
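A minimal pod sketch for this exercise (the pod name nginx-nodeselector is just a suggestion; replace <node-name> with the hostname label value you found):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nodeselector
spec:
  nodeSelector:
    kubernetes.io/hostname: <node-name> # the label value from the chosen node
  containers:
  - image: nginx
    name: nginx-nodeselector
```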
-
Taint a node with key=spray, value=mortein and effect=NoSchedule. Check that new pods are not scheduled on it.
show
Taint and Toleration documentation
kubectl taint nodes <node-name> spray=mortein:NoSchedule
Create nginx pod and check that it's not scheduled onto the tainted node.
Take-away: A taint allows a node to refuse pods unless they have a matching toleration.
-
Create another pod called nginx-toleration with the nginx image that tolerates the above taint.
show
Taint and Toleration documentation
Use the documentation to figure out the yaml and check that the pod has landed on the tainted node.
Delete the pod and remove the taint from the node. Use the following to remove the taint:
kubectl taint nodes <node-name> spray=mortein:NoSchedule-
Take-away: Pods can be scheduled on tainted nodes if they tolerate the taint.
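One possible yaml for the tolerating pod, matching the taint from the previous exercise:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-toleration
spec:
  tolerations:
  - key: "spray"
    operator: "Equal"
    value: "mortein"
    effect: "NoSchedule"
  containers:
  - image: nginx
    name: nginx-toleration
```

Note that a toleration only allows scheduling on the tainted node, it does not force it.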
-
Create a DaemonSet using image fluentd-elasticsearch:1.20.
show
Use this yaml or try to write it yourself from the documentation.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app: elastic-search
  name: elastic-search
  namespace: i353953
spec:
  selector:
    matchLabels:
      app: elastic-search
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: elastic-search
    spec:
      containers:
      - image: k8s.gcr.io/fluentd-elasticsearch:1.20
        name: fluentd-elasticsearch
        resources: {}
Take-away: A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
-
Add the label color=blue to one node and create an nginx deployment called blue with 5 replicas and a node affinity rule to place the pods onto the labeled node.
show
Affinity and anti-affinity documentation
kubectl label node <node-name> color=blue
Generate your own yaml or use this one. There is something wrong with it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 5
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      affinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: color
              operator: In
              values:
              - blue
Check that all pods are scheduled onto the labeled node.
Take-away: Node affinity is a set of rules used by the scheduler to determine where a pod can be placed.
-
Create a configmap named my-config with values key1=val1 and key2=val2. Check its values.
show
kubectl -n i353953 create configmap my-config --from-literal=key1=val1 --from-literal=key2=val2
kubectl -n i353953 get cm my-config -o yaml
Take-away: ConfigMap gives you a way to inject configuration data into your application.
-
Create a configMap called 'opt' with the value key5=val5. Create a new nginx-opt pod that loads the value of key 'key5' into an env variable called 'OPTIONS'.
show
Use the documentation to figure out the yaml file.
kubectl -n i353953 exec -it nginx-opt -- env | grep OPTIONS # should return val5
Take-away: ConfigMap is a namespaced resource.
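A possible yaml for nginx-opt, assuming the configmap 'opt' was created with --from-literal=key5=val5 (names as given in the exercise):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-opt
spec:
  containers:
  - image: nginx
    name: nginx-opt
    env:
    - name: OPTIONS
      valueFrom:
        configMapKeyRef:
          name: opt
          key: key5
```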
-
Create a configmap 'anotherone' with values 'var6=val6' and 'var7=val7'. Load this configmap as env variables into an nginx-sec pod.
show
kubectl -n i353953 exec -it nginx-sec -- env | grep var # should return var6=val6\nvar7=val7
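One way to load the whole configmap at once is envFrom; a sketch for the nginx-sec pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sec
spec:
  containers:
  - image: nginx
    name: nginx-sec
    envFrom:
    - configMapRef:
        name: anotherone
```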
-
Create a configMap 'cmvolume' with values 'var8=val8' and 'var9=val9'. Load this as a volume inside an nginx-cm pod on path '/etc/spartaa'. Create the pod and 'ls' into the '/etc/spartaa' directory.
show
Hints: create the CM and use --dry-run=client to generate the yaml. After that add the corresponding fields to the yaml.
kubectl -n i353953 exec -it nginx-cm -- ls /etc/spartaa # should return var8 var9
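A possible yaml for nginx-cm (the volume name cm-volume is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm
spec:
  volumes:
  - name: cm-volume
    configMap:
      name: cmvolume
  containers:
  - image: nginx
    name: nginx-cm
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/spartaa
```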
-
Create an nginx pod with requests cpu=100m, memory=256Mi and limits cpu=200m, memory=512Mi.
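show
A possible yaml (the pod name nginx-resources is just a suggestion):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-resources
spec:
  containers:
  - image: nginx
    name: nginx-resources
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 512Mi
```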
-
Create a secret called mysecret with values password=mypass and check its yaml.
show
kubectl -n i353953 create secret generic mysecret --from-literal=password=mypass
Take-away: Secrets are base64-encoded, not encrypted -> bXlwYXNz.
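The encoded value above is plain base64, which anyone with read access to the Secret can reverse locally; a quick demonstration with the coreutils base64 tool:

```shell
# Encode the secret value the same way the API server stores it
printf '%s' mypass | base64
# prints: bXlwYXNz

# Decode it back to the original password
printf '%s' bXlwYXNz | base64 -d
# prints: mypass
```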
-
Create an nginx pod that mounts the secret mysecret in a volume on path /etc/foo.
show
Hint: The approach is similar to configMaps.
Take-away: Secret is a namespaced resource.
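A sketch of the pod yaml (the pod name nginx-secret and volume name foo are suggestions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secret
spec:
  volumes:
  - name: foo
    secret:
      secretName: mysecret
  containers:
  - image: nginx
    name: nginx-secret
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
```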
-
Get the list of nodes in JSON format and store it in a file.
show
kubectl get nodes -o json > brahmaputra.json
Take-away: Check what other output formats are available.
-
Get CPU/memory utilization for nodes.
show
kubectl top nodes
Take-away: kubectl top pods --all-namespaces=true can be used for pods.
-
Create an nginx pod with a liveness probe that just runs the command 'ls'. Check probe status.
show
kubectl -n i353953 run nginx-live --image=nginx --dry-run=client -o yaml > pod_liveness.yaml
vi pod_liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-live
  name: nginx-live
spec:
  containers:
  - image: nginx
    name: nginx-live
    resources: {}
    livenessProbe: # add
      exec: # add
        command: # add
        - ls # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl -n i353953 apply -f pod_liveness.yaml
kubectl -n i353953 describe pod nginx-live
Take-away: The kubelet uses liveness probes to know when to restart a container.
-
Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80.
show
Configure Liveness, Readiness Probes
kubectl -n i353953 run nginx-ready --image=nginx --dry-run=client -o yaml --port=80 > pod_readiness.yaml
Find what needs to be added to the file from the above documentation.
Take-away: K8s uses readiness probes to decide when the container is available for accepting traffic.
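A sketch of what needs to be added to the generated file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-ready
  name: nginx-ready
spec:
  containers:
  - image: nginx
    name: nginx-ready
    ports:
    - containerPort: 80
    readinessProbe: # add
      httpGet: # add
        path: / # add
        port: 80 # add
```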
-
Use JSON PATH query to retrieve the osImages of all the nodes.
show
kubectl get nodes -o jsonpath="{.items[*].status.nodeInfo.osImage}"
You should see: Container Linux by CoreOS 2303.3.0 (Rhyolite)
Take-away: Try to understand the construct of the query.
-
Create a PersistentVolume of 1Gi, called 'myvolume-i353953'. Make it have accessModes 'ReadWriteOnce' and 'ReadWriteMany', storageClassName 'normal', mounted on hostPath '/etc/foo'. List all PersistentVolumes.
show
PersistentVolume documentation
Use the following output:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: myvolume-i353953
spec:
  storageClassName: normal
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  hostPath:
    path: /etc/foo
kubectl get pv # status should be Available
Take-away: PersistentVolume is not a namespaced resource.
-
Create a PersistentVolumeClaim called 'mypvc-i353953' requesting 400Mi with accessMode 'ReadWriteOnce' and storageClassName 'normal'. Check the status of the PersistentVolume.
show
Use PersistentVolumeClaim documentation to figure out the correct yaml.
The status of the PersistentVolume myvolume-i353953 should be Bound.
Take-away: PersistentVolumeClaim is a namespaced resource.
-
Create a busybox pod with command 'sleep 3600'. Mount the PersistentVolumeClaim mypvc-i353953 to '/etc/foo'. Connect to the 'busybox' pod, and copy the '/etc/passwd' file to '/etc/foo/passwd'.
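show
A possible pod yaml, mounting the claim from the previous exercise (the volume name foo is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
  - name: foo
    persistentVolumeClaim:
      claimName: mypvc-i353953
  containers:
  - image: busybox
    name: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
```

Then copy the file with: kubectl -n i353953 exec busybox -- cp /etc/passwd /etc/foo/passwd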
-
Create a second pod identical to the one you just created (use a different name). Connect to it and verify that '/etc/foo' contains the 'passwd' file. Delete the pods.
show
Nope
-
Create busybox-user pod that runs sleep for 1 hour and has user ID set to 101. Check the UID from within the container.
show
Security Context documentation
kubectl -n i353953 run busybox-user --image=busybox --dry-run=client -o yaml --command -- sleep 3600 > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox-user
  name: busybox-user
spec:
  securityContext: # Add
    runAsUser: 101 # Add
  containers:
  - command:
    - sleep
    - "3600"
    image: busybox
    name: busybox-user
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
kubectl -n i353953 exec -it busybox-user -- id -u # should return 101
-
Create the YAML for an nginx pod that has capabilities "NET_ADMIN" and "SYS_TIME".
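show
A sketch of the yaml (the pod name nginx-caps is a suggestion):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-caps
spec:
  containers:
  - image: nginx
    name: nginx-caps
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]
```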
-
Create a new service account with the name pvviewer-IUSER. Grant this service account access to list all PersistentVolumes in the cluster by creating an appropriate ClusterRole called pvviewer-role-IUSER and a ClusterRoleBinding called pvviewer-role-binding-IUSER.
show
kubectl create serviceaccount pvviewer-i353953
kubectl create clusterrole pvviewer-role-i353953 --resource=pv --verb=list
kubectl create clusterrolebinding pvviewer-role-binding-i353953 --clusterrole=pvviewer-role-i353953 --serviceaccount=default:pvviewer-i353953
Take-away: Read the documentation to understand more about Role, ClusterRole, RoleBinding and ClusterRoleBinding.
-
Create a pod with image nginx called nginx-1 and expose its port 80.
show
kubectl -n i353953 run nginx-1 --image=nginx --port=80 --expose
Check that both pod and service are created.
Take-away: --expose can be really handy for basic services.
-
Get service's ClusterIP, create a temp busybox-1 pod and 'hit' that IP with wget.
show
kubectl -n i353953 get svc nginx-1
kubectl -n i353953 run busybox-1 --rm --image=busybox -it -- sh
/ # wget -O- $CLUSTER_IP:80
Take-away: ClusterIP is only reachable from within the cluster.
-
Convert the service from ClusterIP to NodePort and find the NodePort. Hit the service (create a temp busybox pod) using the Node's IP and port.
show
kubectl -n i353953 edit svc nginx-1 # ClusterIP -> NodePort
kubectl -n i353953 describe svc nginx-1 # find NodePort
Create temp busybox pod and execute the following:
/ # wget -O- $NODE_IP:$NODE_PORT
-
Create an nginx-last deployment of 2 replicas, expose it via a ClusterIP service on port 80. Create a NetworkPolicy so that only pods with labels 'access: granted' can access the deployment.
show
Create the deployment and expose it.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx-last # selector for the pods
  ingress: # allow ingress traffic
  - from:
    - podSelector: # from pods
        matchLabels: # with this label
          access: granted
Apply the above yaml and test with temporary busybox pods.
kubectl -n i353953 run busybox --image=busybox --rm -it -- wget -O- http://nginx-last:80 --timeout 2 # This should fail
kubectl -n i353953 run busybox --image=busybox --rm -it --labels=access=granted -- wget -O- http://nginx-last:80 --timeout 2 # This should work
Take-away: With NetworkPolicy you can configure how groups of pods are allowed to communicate with each other.
-
Create an nginx pod called nginx-resolver using the image nginx and expose it internally with a service called nginx-resolver-service. Test that you are able to look up the service and pod names from within the cluster. Use the image busybox:1.28 for the DNS lookup.
show
You need to figure it out alone :)
-
List the InternalIP of all nodes of the cluster.
show
Hint: use jsonpath
-
Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image redis:alpine with toleration to be scheduled on node01.
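show
One way to approach it, assuming a hypothetical taint key/value env=prod (the exercise leaves the exact key up to you). Taint the node with kubectl taint nodes node01 env=prod:NoSchedule, create dev-redis with a plain kubectl run and check that it is not scheduled on node01, then create prod-redis from a yaml like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  tolerations:
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"
  containers:
  - image: redis:alpine
    name: prod-redis
```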
-
Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir that lasts for the life of the Pod. Use volumeMount with mountPath = /data/redis.
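show
A possible yaml for redis-storage (the volume name redis-data is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-storage
spec:
  volumes:
  - name: redis-data
    emptyDir: {}
  containers:
  - image: redis:alpine
    name: redis-storage
    volumeMounts:
    - name: redis-data
      mountPath: /data/redis
```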
-
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Record the version. Next upgrade the deployment to version 1.17 using rolling update. Make sure that the version upgrade is recorded in the resource annotation.
kubectl delete ns i353953