# vagrant-kubernetes

Kubernetes cluster from scratch in Vagrant.
## Overview
The goal of this project was to gain a deeper understanding of how Kubernetes fits together, in order to then work out a deployment strategy via the usual methods, e.g. Salt or userdata. As it stands it's very basic and was built purely for learning, so it doesn't do much beyond bootstrapping at the moment. Using the docs on kubernetes.io I was able to piece this cluster together.
As per Kubernetes' instructions I have installed the `kubelet` and `docker` binaries, and then configured all the other components via static pod manifests (see the `files/manifests` directory) and also via `kubectl create -f`. This allows for a very clean and repeatable bootstrap experience. I have used the `hyperkube` Docker image, which contains the `hyperkube` all-in-one binary; this means you can run all of the components with just the one binary, e.g. `kube-proxy`, `kube-apiserver`, `kube-controller-manager` and `kube-scheduler`.
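As an illustration, a static pod manifest for one of these components might look something like the sketch below. The image tag, flags and addresses here are assumptions for illustration only; the actual manifests live in `files/manifests`:

```yaml
# Hypothetical static pod manifest running kube-scheduler from the hyperkube image.
# The kubelet watches its manifest directory and starts/restarts this pod itself,
# with no API server involvement needed.
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: gcr.io/google_containers/hyperkube:v1.6.2  # tag is illustrative
    command:
    - /hyperkube
    - scheduler
    - --master=https://127.0.0.1:6443  # apiserver address is an assumption
```

Because every component is the same binary invoked with a different subcommand, swapping which component a manifest runs is just a matter of changing the `command` list.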
## SSL
With thanks to kelseyhightower, I was able to create valid self-signed certs via his repo docker-kubernetes-tls-guide. You just need to clone the repo, install the CFSSL tool, edit the relevant JSON files and create your SSL certs. For reference, I have added the `*.json` files used to generate the SSL certs with CFSSL; they can be found in `files/certs`.
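For context, a CFSSL certificate signing request file generally has the shape sketched below. The CN, hosts and name values here are placeholders, not the actual contents of the files in `files/certs`:

```json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.10",
    "master.kubernetes.com"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "GB",
      "O": "kubernetes"
    }
  ]
}
```

The `hosts` list becomes the certificate's subject alternative names, so it needs to cover every IP and hostname the component will be reached on.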
## Reading
See below for some links I used to help build this:
- Creating a Custom Cluster from Scratch
- Building High-Availability Clusters
- etcd Cluster Guide
- Kubernetes The Hard Way
## Prerequisites

To run this you will need the following installed:
## Usage
Using Vagrant I spin up two nodes: one master (`master.kubernetes.com` (`10.0.0.10`)) and one worker (`worker.kubernetes.com` (`10.0.0.11`)).
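A minimal Vagrantfile for a two-node layout like this could look as follows. The box name and network type are assumptions for illustration; the repo's actual Vagrantfile is authoritative:

```ruby
# Hypothetical two-node definition matching the hostnames/IPs above.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"  # box choice is an assumption

  config.vm.define "master" do |master|
    master.vm.hostname = "master.kubernetes.com"
    master.vm.network "private_network", ip: "10.0.0.10"
  end

  config.vm.define "worker" do |worker|
    worker.vm.hostname = "worker.kubernetes.com"
    worker.vm.network "private_network", ip: "10.0.0.11"
  end
end
```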
To start the cluster, you just need to run Vagrant:

```shell
vagrant up
```
If you just want to bring up a single node, e.g. the master, you can specify the individual node: `vagrant up master`.
Once provisioned you can log into each box and play around with the functionality of Kubernetes.

```shell
vagrant ssh master
vagrant ssh worker
```
If everything has provisioned successfully, when you run `journalctl -u kubelet.service` the logs should look like this:
```
May 19 14:57:39 master kubelet[9406]: I0519 14:57:39.831847 9406 kubelet_node_status.go:77] Attempting to register node master
May 19 14:57:39 master kubelet[9406]: I0519 14:57:39.845771 9406 kubelet_node_status.go:80] Successfully registered node master
May 19 14:57:49 master kubelet[9406]: I0519 14:57:49.868895 9406 kuberuntime_manager.go:902] updating runtime config through cri with podcidr 10.10.0.0/24
May 19 14:57:49 master kubelet[9406]: I0519 14:57:49.869405 9406 docker_service.go:277] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.10.0.0/24,},}
May 19 14:57:49 master kubelet[9406]: I0519 14:57:49.869757 9406 kubelet_network.go:326] Setting Pod CIDR: -> 10.10.0.0/24
```
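If you'd rather script a check on the reported Pod CIDR than eyeball the journal, a small sketch (the `sed` pattern matches the log format shown above; a sample line stands in for the live `journalctl -u kubelet.service` output):

```shell
# Pull the Pod CIDR out of a kubelet log line like the ones above.
log_line='May 19 14:57:49 master kubelet[9406]: I0519 14:57:49.869757 9406 kubelet_network.go:326] Setting Pod CIDR: -> 10.10.0.0/24'
pod_cidr=$(printf '%s\n' "$log_line" | sed -n 's/.*Setting Pod CIDR:.*-> //p')
echo "$pod_cidr"  # → 10.10.0.0/24
```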
You can then use `kubectl` to have a play with the `kube-apiserver`:

```shell
kubectl get all --all-namespaces -o wide
```
The Kubernetes Dashboard will be available once the cluster has successfully converged. The web page requires authentication: username `admin` and password `password`. Use the links below to access the services:

| Service | URL |
|---|---|
| Kube Dashboard | https://localhost:6443/ui |
| Node Exporter | http://localhost:9100/metrics |
| Kube State Metrics | http://localhost:9090/metrics |
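The dashboard credentials above are sent as HTTP basic auth, i.e. the `Authorization` header carries their base64 encoding. A sketch (the `curl` line in the comment is illustrative only):

```shell
# Encode the admin:password credentials for an HTTP Basic Authorization header.
echo -n 'admin:password' | base64  # → YWRtaW46cGFzc3dvcmQ=

# Illustrative equivalent: curl -k -u admin:password https://localhost:6443/ui
```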
When finished, you can destroy the cluster:

```shell
vagrant destroy -f
```
## Troubleshooting

### 503 Service unavailable
If you receive this error when fully provisioned, make sure both nodes are operational, e.g. via `ssh` or `ping`. If not, you may need to restart the node; this is easily done via Vagrant:

```shell
vagrant reload <node>
```
## To Do
- Secure communication between `kube-apiserver` and `etcd` (there's a fix for the TLS handshake issue in version 1.7.0, but it's still in alpha)
- Manage host files via vagrant-hosts instead of manually editing them on each machine
- Configure Prometheus and Grafana for monitoring
- Access webpages, e.g. Kubernetes Dashboard, from the guest on the host - DONE
- Configure Kube Dashboard - DONE
- Configure the DNS add-on - DONE
- Overlay network with flannel using the CNI plugin - DONE
- Fix TLS/certificate issues with `kube-apiserver` (currently using http) - DONE
- Use `--kubeconfig` instead of `--api-servers` for the `kubelet` config - DONE
- Upgrade etcd from 2 > 3 - DONE
- Pods to only run on nodes, and not on the master, via the use of taints - DONE