This installs a test Kubernetes cluster on Vagrant using VirtualBox hosts.
You can set up the cluster and the kubectl context using the setup.sh
script. This configures 1 master node and 3 worker nodes. You can change the number of worker nodes
by updating the value of NODE_COUNT in the Vagrantfile.
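As an illustration, the NODE_COUNT change can also be scripted with sed. The exact `NODE_COUNT = n` assignment syntax is an assumption about the Vagrantfile, so the snippet rehearses the rewrite on a stand-in file; on the real repo, point sed at ./Vagrantfile instead.

```shell
# Stand-in for the real Vagrantfile (assumed to contain a line like "NODE_COUNT = 3").
echo 'NODE_COUNT = 3' > /tmp/Vagrantfile.demo
# Rewrite the worker count in place (here: 3 -> 5); a .bak backup is kept.
sed -i.bak 's/^NODE_COUNT *= *[0-9][0-9]*/NODE_COUNT = 5/' /tmp/Vagrantfile.demo
cat /tmp/Vagrantfile.demo    # NODE_COUNT = 5
```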
$ ./setup.sh -h
Kubernetes cluster setup on vagrant.
Usage:
setup.sh [-h|--help] [-n|--networking <flannel|calico|canal|weavenet>] [-c|--host-count <n>]
Arguments:
-h|--help Print usage
-n|--networking <flannel|calico|canal|weavenet> Kubernetes networking model to use [Default: flannel]
-c|--host-count <n> Number of worker nodes [Default: 2]
Examples:
./setup.sh
./setup.sh -n calico
./setup.sh -n weavenet -c 3
You can destroy the cluster and the kubectl config using the destroy.sh script.
$ sh destroy.sh
- If you see an error like the one below while running the setup
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: 0%...
Progress state: NS_ERROR_FAILURE
Reinstall VirtualBox and allow the Oracle system extension
under System Preferences > Security & Privacy (macOS)
- VirtualBox 6.1.28 and later require additional configuration for host-only networks; see the VirtualBox manual and the Changelog for details.
Create the file /etc/vbox/networks.conf
with the IP ranges VirtualBox is allowed to use.
$ cat /etc/vbox/networks.conf
* 172.28.128.0/24
* 192.168.56.0/24
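The file can be created in one shot. Run this as root on the host machine, and adjust the ranges if your Vagrantfile uses different subnets.

```shell
# Allow VirtualBox to create host-only networks in these ranges.
mkdir -p /etc/vbox
printf '* 172.28.128.0/24\n* 192.168.56.0/24\n' > /etc/vbox/networks.conf
cat /etc/vbox/networks.conf
```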
- Install containerd
# curl -sSLO https://github.com/containerd/containerd/releases/download/v1.6.14/containerd-1.6.14-linux-amd64.tar.gz
# tar xzvf containerd-1.6.14-linux-amd64.tar.gz -C /usr/local
- Install runc
# curl -sSLO https://github.com/opencontainers/runc/releases/download/v1.1.3/runc.amd64
# install -m 755 runc.amd64 /usr/local/sbin/runc
- Install CNI plugins
# curl -sSLO https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
# mkdir -p /opt/cni/bin
# tar xzvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin
- Configure containerd
# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml
# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# curl -L https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -o /etc/systemd/system/containerd.service
# systemctl daemon-reload
# systemctl enable --now containerd
# systemctl status containerd
- Configure kubelet to use containerd as the runtime
Edit the file /var/lib/kubelet/kubeadm-flags.env and add
--container-runtime=remote and
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
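After the edit, the file might look something like this. This is a sketch: KUBELET_KUBEADM_ARGS is the variable kubeadm writes to this file, and any flags already present on your host should be kept alongside the two new ones.

```shell
# /var/lib/kubelet/kubeadm-flags.env (example -- existing flags vary per host)
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```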
The kubeadm tool stores the CRI socket for each host as an annotation on the Node object. To change it, run
$ kubectl edit node <node-name>
and in the editor change the value of kubeadm.alpha.kubernetes.io/cri-socket
from /var/run/dockershim.sock
to the CRI socket path of your choice, in this case unix:///run/containerd/containerd.sock, and save the change.
Restart kubelet
# systemctl restart kubelet
Check that the runtime has changed
# kubectl get nodes -o wide
From any node, run the command below to test the container runtime
$ critest -parallel 10 -ginkgo.succinct
Tested with the following versions of the apps
- Vagrant 2.3.4 (2.3.2 or higher is required with VirtualBox 7.x)
- VirtualBox 6.1.40/7.0.4 (6.1.28 and higher have an issue with host-only networks; see the troubleshooting section for details)
- yq 4.6.1
- ubuntu/xenial64 (v20210623.0.0)
- ubuntu/bionic64 (v20220317.0.0)