This repository includes steps for setting up Kubernetes on an Ubuntu machine using Vagrant. I basically wanted my own development cluster, but had only invested in a single server (womp womp)! To solve this problem.... virtual machines to the rescue! It is based on assets from mbaykara/k8s-cluster and this article.
First, clone the repository:
$ git clone https://github.com/vsoch/k8s-vagrant
We need Vagrant and VirtualBox:
$ sudo apt-get update
$ sudo apt-get install -y vagrant virtualbox
Then bring up the cluster!
$ cd k8s-vagrant
$ vagrant up
We will next want to ssh in to our main and worker nodes.
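Bringing all three machines up can take a few minutes. A quick way to confirm they came up is vagrant status; I'm assuming the Vagrantfile names the machines main, worker-1, and worker-2, matching the ssh commands below.
# Should list main, worker-1, and worker-2 as "running" (names assumed from the Vagrantfile)
$ vagrant status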
$ vagrant ssh main
kubeadm ("Kube Admin") is installed, and you can use it to initialize the main node.
$ which kubeadm
/usr/bin/kubeadm
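Two flags in the init command below are worth calling out, as I understand this setup: --apiserver-advertise-address should be the main VM's private network IP (192.168.33.13 here, which I'm assuming the Vagrantfile assigns), and --pod-network-cidr=10.244.0.0/16 matches the default network that the Flannel manifest we apply later expects. If you want to double check the IP before running init:
# Confirm the private network address on the main node (the interface name can vary;
# eth1 is typical for a VirtualBox private network)
vagrant@main:~$ ip -4 addr show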
vagrant@main:~$ sudo kubeadm init --apiserver-advertise-address 192.168.33.13 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
...
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.33.13:6443 --token xxxxxxxxxxxxxxxxxxxx \
--discovery-token-ca-cert-hash xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
When it is finished (you see the last lines above) you'll want to set up your local kube config file. If you've ever connected to an already running cluster, you likely used this:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can try this, and see that the node isn't ready yet:
$ kubectl get nodes
NAME   STATUS     ROLES                  AGE     VERSION
main   NotReady   control-plane,master   7m23s   v1.23.1
(TODO: I may try Calico here instead of Flannel at some point.)
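The node shows NotReady because no pod network add-on is installed yet. If you're curious, the node's Ready condition spells this out (the exact message varies by version, but it typically mentions the missing CNI plugin):
$ kubectl get node main -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'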
Apply the networking policy:
$ kubectl apply -f https://raw.githubusercontent.com/vsoch/k8s-vagrant/main/k8s/network/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
If you see nodes are unhealthy, follow instructions in this post. Mine were healthy :)
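You can also watch the Flannel pods come up and the node flip to Ready after a minute or so (a quick sanity check; listing all namespaces avoids assuming where this manifest installs things):
# The flannel daemonset pods should go Running, and the node should become Ready shortly after
$ kubectl get pods -A
$ kubectl get nodes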
$ kubectl get cs
Then exit back to your local machine.
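(One side note: componentstatuses, the cs above, is deprecated in recent Kubernetes releases, so if that command complains, the API server health endpoints are the longer-term way to check the same thing.)
# Verbose readiness report straight from the API server
$ kubectl get --raw='/readyz?verbose'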
For each worker, do:
# Example to connect to each one
$ vagrant ssh worker-1
$ vagrant ssh worker-2
Note that you'll need to copy this particular token and hash from the main node output (the last lines we saw above).
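If you've lost that output, you don't have to re-run init; kubeadm can print a fresh join command for you from the main node:
# Run on the main node to generate a new token and print the full join command
vagrant@main:~$ sudo kubeadm token create --print-join-command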
$ sudo kubeadm join 192.168.33.13:6443 --token xxxxxxxxxxxxxxxxxxxxxxxxxxx \
--discovery-token-ca-cert-hash xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Then go back to the main node, and check your cluster again:
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
main       Ready    control-plane,master   10m     v1.23.1
worker-1   Ready    <none>                 3m11s   v1.23.1
worker-2   Ready    <none>                 102s    v1.23.1
Now create a test deployment.
$ kubectl create deployment nginx --image=nginx --port 80
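Before exposing it, you can check that the pod actually comes up (a quick sanity check):
# The nginx pod should show STATUS Running once the image has pulled
$ kubectl get pods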
To access it from outside the cluster we have to expose it as follows:
$ kubectl expose deployment nginx --port 80 --type=NodePort
Then get the node port (the service will be reachable at the worker nodes' addresses, which you can find in /etc/hosts):
$ kubectl describe services nginx
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.17.161
IPs:                      10.111.17.161
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31412/TCP   <----
Endpoints:                10.244.1.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
And get it!
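If you'd rather not eyeball the describe output, you can pull the node port directly with jsonpath, and -o wide on the nodes shows the IPs you can curl (192.168.33.14 below is presumably worker-1's address):
# Grab just the assigned NodePort from the service spec
$ kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'
# List node internal IPs (any worker's IP plus the NodePort should work)
$ kubectl get nodes -o wide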
$ curl http://192.168.33.14:31412
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
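When you're done playing around, cleanup is straightforward: delete the test resources from the main node, then halt or destroy the VMs from your local machine (destroy is irreversible, so only do that if you're truly finished).
# On the main node: remove the test service and deployment
$ kubectl delete service nginx
$ kubectl delete deployment nginx
# Back on your local machine: stop the VMs (or destroy them entirely)
$ vagrant halt
$ vagrant destroy -f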