This playbook builds an HA Kubernetes cluster with `k3s`, `kube-vip`, and MetalLB via Ansible.

It is based on the work from this fork, which in turn is based on k3s-io/k3s-ansible. It uses `kube-vip` to create a load balancer for the control plane, and MetalLB for services of type `LoadBalancer`.
If you want more context on how this works, see:

- 📖 Documentation (including example commands)
- 📺 Watch the Video
Build a Kubernetes cluster using Ansible with k3s. The goal is to easily install an HA Kubernetes cluster on machines running:

- Debian (tested on version 11)
- Ubuntu (tested on version 22.04)
- Rocky (tested on version 9)

on processor architectures:

- x64
- arm64
- armhf
- Control Node (the machine you are running `ansible` commands on) must have Ansible 2.11+. If you need a quick primer on Ansible you can check out my docs and setting up Ansible.
- You will also need to install the collections that this playbook uses by running `ansible-galaxy collection install -r ./collections/requirements.yml` (important❗)
- The `netaddr` package must be available to Ansible. If you have installed Ansible via apt, this is already taken care of. If you have installed Ansible via `pip`, make sure to install `netaddr` into the respective virtual environment, as shown in the sketch after this list.
- `server` and `agent` nodes should have passwordless SSH access; if not, you can supply arguments to provide credentials (`--ask-pass --ask-become-pass`) to each command.
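For the `pip` case above, here is a minimal sketch of preparing the control node (the virtual environment path is just an example; use wherever your Ansible venv actually lives):

```bash
# Hypothetical venv location; adjust to your own setup
source ~/.venvs/ansible/bin/activate
pip install netaddr                                                   # make netaddr visible to Ansible
ansible-galaxy collection install -r ./collections/requirements.yml   # collections used by this playbook
```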
First, create a new directory based on the `sample` directory within the `inventory` directory:

```bash
cp -R inventory/sample inventory/my-cluster
```
Second, edit `inventory/my-cluster/hosts.ini` to match the system information gathered above. For example:
```ini
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
```
If multiple hosts are in the master group, the playbook will automatically set up k3s in HA mode with etcd.
Finally, copy `ansible.example.cfg` to `ansible.cfg` and adapt the inventory path to match the files that you just created.
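For reference, the line to adapt looks something like this (a sketch; check your copy of `ansible.example.cfg` for the exact contents):

```ini
[defaults]
inventory = inventory/my-cluster/hosts.ini
```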
This requires at least k3s version `1.19.1`; however, the version is configurable via the `k3s_version` variable.

If needed, you can also edit `inventory/my-cluster/group_vars/all.yml` to match your environment.
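As a sketch, here are a few of the variables you will most likely want to review (the names follow the sample `group_vars/all.yml` in this repo, but the values below are placeholders for your own network; check the sample file for the full list and current defaults):

```yaml
k3s_version: v1.25.9+k3s1                        # k3s release to install (placeholder)
ansible_user: ansibleuser                        # SSH user on all nodes (placeholder)
apiserver_endpoint: 192.168.30.222               # virtual IP kube-vip will announce
metal_lb_ip_range: 192.168.30.80-192.168.30.90   # address pool MetalLB hands out
```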
Start provisioning of the cluster using the following command:

```bash
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
```
After deployment, the control plane will be accessible via a virtual IP address, which is defined in `inventory/my-cluster/group_vars/all.yml` as `apiserver_endpoint`.
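As a quick sanity check that the VIP is up (using the hypothetical `apiserver_endpoint` value `192.168.30.222`; substitute your own):

```bash
ping -c 3 192.168.30.222
# Any HTTP response here (even 401 Unauthorized) shows the API server is listening on the VIP
curl -k https://192.168.30.222:6443
```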
To reset the cluster, run:

```bash
ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini
```

You should also reboot these nodes afterwards, due to the VIP not being destroyed.
To copy your kube config locally so that you can access your Kubernetes cluster, run:

```bash
scp debian@master_ip:/etc/rancher/k3s/k3s.yaml ~/.kube/config
```
If you get a file permission denied error, go into the node and temporarily run:

```bash
sudo chmod 777 /etc/rancher/k3s/k3s.yaml
```

Then copy the file with the `scp` command above and reset the permissions back with:

```bash
sudo chmod 600 /etc/rancher/k3s/k3s.yaml
```
You'll then want to modify the config to point to the master IP by running:

```bash
sudo nano ~/.kube/config
```

Then change `server: https://127.0.0.1:6443` to match your master IP: `server: https://192.168.1.222:6443`.
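Once the kubeconfig points at the right address, a quick way to confirm everything works (assuming `kubectl` is installed locally):

```bash
kubectl get nodes -o wide   # all masters and nodes should report Ready
kubectl get pods -A         # system pods, including kube-vip and MetalLB, should be Running
```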
See the commands here.
Be sure to see this post on how to troubleshoot common problems.
This playbook includes a molecule-based test setup. It is run automatically in CI, but you can also run the tests locally. This might be helpful for quick feedback in a few cases. You can find more information about it here.
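If you want to try the tests locally, the usual molecule entry points look like this (a sketch; this assumes you have installed molecule and the driver dependencies this repo's scenarios expect, which the linked documentation covers):

```bash
molecule test       # run a full create/converge/verify/destroy cycle
molecule converge   # just bring the test instances up, for faster iteration
```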
This repo uses `pre-commit` and `pre-commit-hooks` to lint and fix common style and syntax errors. Be sure to install the Python packages and then run `pre-commit install`. For more information, see pre-commit.
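One way to set this up, assuming a Python environment is already active:

```bash
pip install pre-commit          # any supported install method works
pre-commit install              # register the git hook in this repo
pre-commit run --all-files      # optionally lint the whole tree once
```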
This collection can now be used in larger Ansible projects.

Instructions:

- Create or modify a file `collections/requirements.yml` in your project:
```yaml
collections:
  - name: ansible.utils
  - name: community.general
  - name: ansible.posix
  - name: kubernetes.core
  - name: https://github.com/techno-tim/k3s-ansible.git
    type: git
    version: master
```
- Install via `ansible-galaxy collection install -r ./collections/requirements.yml`.
- Every role is now available via the prefix `techno_tim.k3s_ansible.`, e.g. `techno_tim.k3s_ansible.lxc`.
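A playbook in your project could then reference a role from this collection like so (a minimal sketch; the host group name is hypothetical and should match your own inventory):

```yaml
# site.yml in your own project
- hosts: k3s_cluster        # hypothetical group name from your inventory
  become: true
  roles:
    - role: techno_tim.k3s_ansible.lxc
```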
This repo is really standing on the shoulders of giants. Thank you to all those who have contributed and thanks to these repos for code and ideas: