kubernetes-inside-proxmox-lxc

Universal instructions for running a K8s cluster with various Container Runtimes inside a Proxmox LXC container.

License: Apache-2.0

Language: 🇷🇺 · 🇺🇸

Instructions on how to deploy a working K8s cluster in a Proxmox LXC container.

Preface

Note

This guide is based on several articles, official documentation, and my own practice.
All references to original sources are listed at the end of the README.

Tested on:

Proxmox:

Kernel Version: Linux 6.5.11-7-pve (2023-12-05T09:44Z)
Manager Version: pve-manager/8.1.3/b46aac3b42da5d15

Kubernetes:

kubectl v1.29.0
crictl v1.29.0
cri-dockerd v0.3.9

Proxmox preparation

Kernel modules

Let's load into the kernel the modules recommended for running Docker containers and for K8s in general.

To do this, edit /etc/modules and add:

overlay
br_netfilter
ip_vs
nf_nat
xt_conntrack

A brief note on why each module is needed:

  • overlay - the OverlayFS module, which container runtimes use as the storage driver for stacking container image layers.
  • br_netfilter - a module that allows bridged traffic to be filtered by iptables.
  • ip_vs - the IP Virtual Server (IPVS) module, a kernel-level load balancer used, among other things, by kube-proxy in IPVS mode to build high-performance, fault-tolerant services.
  • nf_nat - a module that provides NAT (Network Address Translation) for redirecting traffic between different network interfaces or subnets.
  • xt_conntrack - an iptables match module that lets rules act on the connection-tracking state of TCP/UDP connections.

To avoid rebooting the node, we activate the modules via the command:

modprobe <module>
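
For example (a quick loop over the list above), all five modules can be loaded at once:

for m in overlay br_netfilter ip_vs nf_nat xt_conntrack; do
    modprobe "$m"
done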

Let's check the active modules:

lsmod | grep -E 'overlay|br_netfilter|ip_vs|nf_nat|xt_conntrack'

It is also recommended to run the command to update the existing initramfs image that is used at system boot:

update-initramfs -u

Note

The initramfs image contains kernel modules, device drivers, scripts, and utilities needed for the system to work properly at boot time.

Network traffic

Let's also make sure that iptables correctly sees bridged traffic and that the node can forward packets. For this purpose, we'll create a config that permits forwarding of network traffic:

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

Apply the parameters with the command:

sysctl --system

Check the changes on the node:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
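
All three parameters should come back as 1:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1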

Swap file

Disable the swap partition for the moment:

swapoff -a

Note

Kubernetes conventionally requires swap to be disabled so that the kubelet can accurately account for free RAM and avoid unpredictable performance.

And comment out the swap lines in /etc/fstab so it won't be re-enabled on the next boot:

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
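
As a quick sanity check (assuming a standard fstab layout), swapon should print nothing and the fstab entry should now be commented out:

swapon --show
grep swap /etc/fstab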

Create LXC container

In the Proxmox UI, let's start creating the container via "Create CT".

General

Immediately check "Advanced" and uncheck "Unprivileged container". It should look like this:

  • Unprivileged container: unchecked
  • Nesting: checked
  • Advanced: checked

Template

Here, choose an image to your taste; my choice was the current version of Ubuntu.

Memory

Turn off swap in the container:

Swap (MiB): 0

Network

The future cluster will have NetworkPolicy and other traffic-limiting tools, so in my opinion the Firewall can be turned off:

  • Firewall: unchecked

Make sure to give our node a static IP address so that it won't change after a while:

IPv4: Static
IPv4/CIDR: 192.168.0.10/24
Gateway (IPv4): 192.168.0.1

Naturally, this is just an example; substitute the values for your own local network.

DNS

If you have a separate DNS server at home (which is a good thing), specify it here; if your router is your primary gateway and DNS server, skip this tab.

Confirm

Don't start the container just yet:

  • Start after created: unchecked

Configure the LXC container

Now we need to prepare the container for proper operation of the K8s cluster. You may want to install a convenient text editor on the Proxmox node and in the LXC container right away; I use vim, since I know how to exit it:

apt install -y vim

Actions outside the container

First, shut down the container. Then, as root on the Proxmox host (via SSH), go to the /etc/pve/lxc directory and edit <container id>.conf with a text editor, where <container id> is the ID of our LXC container.

Add lines to the file:

lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"

Let me tell you a little about these parameters:

  • lxc.apparmor.profile: unconfined - sets the container's AppArmor profile to "unconfined", which disables AppArmor confinement for the LXC container.
  • lxc.cgroup2.devices.allow: a - allows the container access to all devices via cgroup v2.
  • lxc.cap.drop: - the empty value means no capabilities are dropped for the container, which may be necessary for some applications; see the LXC documentation for details.
  • lxc.mount.auto: "proc:rw sys:rw" - mounts /proc and /sys with R/W access inside the container, which is usually necessary for the system to work correctly.

Now we need to push the kernel boot configuration into the container, since kubelet uses it to determine the cluster environment settings.

Start the container and via root in Proxmox run the command:

pct push <container id> /boot/config-$(uname -r) /boot/config-$(uname -r)
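
You can verify from the host that the config arrived, e.g. with a hypothetical container id of 100:

pct exec 100 -- ls -l /boot/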

Next, let's create a symbolic link for /dev/kmsg, which kubelet uses for logging. In LXC we have /dev/console for this, so we'll link to it by creating a bash script at /usr/local/bin/conf-kmsg.sh:

#!/bin/sh -e
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi

mount --make-rshared /

And let's configure the script to run once when the LXC container is started.

Create /etc/systemd/system/conf-kmsg.service with this content:

[Unit]
Description=Make sure /dev/kmsg exists

[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/usr/local/bin/conf-kmsg.sh
TimeoutStartSec=0

[Install]
WantedBy=default.target

Make our script executable and enable the service:

chmod +x /usr/local/bin/conf-kmsg.sh
systemctl daemon-reload
systemctl enable --now conf-kmsg
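
A quick check inside the container that the service ran and the symlink exists:

systemctl status conf-kmsg --no-pager
ls -l /dev/kmsg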

Customize the base environment

Let's update the installed packages and add a few that will be useful later:

apt update && apt upgrade -y
apt install -y wget curl conntrack

Let's remove the default firewall, because K8s uses other traffic management tools:

apt remove -y ufw && apt autoremove -y

kubectl

Let's install the tool for interacting with the Kubernetes API server and managing Kubernetes resources and workloads:

curl -LO "https://dl.k8s.io/release/$(curl -LS https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
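
Optionally, verify the installed binary against the published checksum, as described in the official install docs (note the two spaces in the checksum line):

curl -LO "https://dl.k8s.io/release/$(curl -LS https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  /usr/local/bin/kubectl" | sha256sum --check
rm kubectl.sha256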

Let's check the installed version:

kubectl version --client

helm

Let's install Helm, a Kubernetes package manager that automates application deployment:

apt install -y git
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
rm get_helm.sh

Let's check the installed version:

helm version

The latest version can be found in the repository's Releases section, but the script may install a slightly older build that is more stable for your OS.

crictl

Warning

Required for minikube; in other cases, install on demand.

Let's install crictl, a CLI for inspecting and managing containers through the Kubernetes CRI:

VERSION="v1.29.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz

Let's check the installed version:

crictl -v
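
crictl needs to know which CRI socket to talk to. A minimal config, assuming containerd (adjust runtime-endpoint for your runtime, e.g. unix:///var/run/cri-dockerd.sock for cri-dockerd):

cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF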

The latest version can be found in the repository's Releases section.

containernetworking-plugins

Warning

Required for minikube; in other cases, install on demand.

Let's install the standard set of CNI plugins for container networking in Kubernetes:

CNI_PLUGIN_VERSION="v1.4.0"
CNI_PLUGIN_TAR="cni-plugins-linux-amd64-$CNI_PLUGIN_VERSION.tgz"
CNI_PLUGIN_INSTALL_DIR="/opt/cni/bin"

curl -LO "https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGIN_VERSION/$CNI_PLUGIN_TAR"
mkdir -p "$CNI_PLUGIN_INSTALL_DIR"
tar -xf "$CNI_PLUGIN_TAR" -C "$CNI_PLUGIN_INSTALL_DIR"
rm "$CNI_PLUGIN_TAR"

The latest version can be found in the repository's Releases section.

cri-dockerd

Warning

Required for minikube in conjunction with Docker; otherwise, install it whenever Docker is used as the Container Runtime in K8s.

Let's install cri-dockerd, an adapter that provides compatibility between Docker Engine and the Kubernetes CRI:

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd_0.3.9.3-0.ubuntu-jammy_amd64.deb
dpkg -i cri-dockerd_0.3.9.3-0.ubuntu-jammy_amd64.deb
rm -f cri-dockerd_0.3.9.3-0.ubuntu-jammy_amd64.deb

Let's check the installed version:

cri-dockerd --version
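
The .deb package also registers a systemd service; you can confirm that it is up:

systemctl status cri-docker.service --no-pager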

The latest version can be found in the repository's Releases section.

Install Container Runtime

Docker

Warning

Required for minikube; in other cases, install on demand.

Install dependencies, add apt repository to the system:

apt update
apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update

Install Docker via apt:

apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Let's check the installed version:

docker version
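
And run the classic smoke test to make sure the daemon actually works inside LXC:

docker run --rm hello-world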

containerd

Easy and simple installation via apt:

apt install -y containerd

Let's check the installed version:

containerd --version

Create a folder for the configuration file:

mkdir /etc/containerd/

Generate the default containerd configuration:

containerd config default > /etc/containerd/config.toml

To enable the systemd cgroup driver, flip the SystemdCgroup flag in /etc/containerd/config.toml:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Let's also verify our changes:

grep SystemdCgroup /etc/containerd/config.toml
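
For the new cgroup driver to take effect, restart containerd:

systemctl restart containerd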

CRI-O

Create variables with the OS and the current crio version:

export OS=xUbuntu_22.04
export VERSION=1.24

The current version can be found at download.opensuse.org

Install dependencies, add apt repository to the system:

apt install -y gnupg

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

apt update

Install crio via apt:

apt install -y cri-o cri-o-runc

Check the installed version:

crio -v

Now we need to disable AppArmor for crio:

sed -i 's/# apparmor_profile = "crio-default"/apparmor_profile = "unconfined"/g' /etc/crio/crio.conf

Copy the config so it is also picked up when working with minikube:

cp /etc/crio/crio.conf /etc/crio/crio.conf.d/02-crio.conf

Check the changes:

cat /etc/crio/crio.conf /etc/crio/crio.conf.d/02-crio.conf | grep apparmor_profile

Start crio and enable it at boot:

systemctl enable --now crio
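
And make sure it came up cleanly:

systemctl status crio --no-pager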

Install Kubernetes

minikube

Container Runtime

The choice of a container runtime for Kubernetes depends on your requirements and preferences, but the most common runtimes are Docker, containerd and CRI-O.

  1. Docker - the most widely used runtime, included in most Kubernetes distributions.

  2. containerd - the second most popular runtime, also commonly used with Kubernetes.

  3. CRI-O - a runtime specifically designed to conform to the Kubernetes Container Runtime Interface (CRI).

To make up my mind, I created K8s clusters under identical conditions with each Container Runtime, and this is what I got:

Container Runtime | Creation time (seconds)
----------------- | -----------------------
Docker            | ~25
containerd        | ~22
CRI-O             | ~16

Installation

Download the package and install:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
install minikube-linux-amd64 /usr/local/bin/minikube
rm -f minikube-linux-amd64

Let's install the recommended dependencies:

apt install -y ethtool socat

Docker

Now it's safe to start the cluster:

minikube start --vm-driver=none --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --kubernetes-version=v1.29.0 --container-runtime=docker
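
Once it finishes, a quick status check (with --vm-driver=none, the cluster runs directly on this host):

minikube status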

containerd

Running minikube via containerd requires docker-cli. As per the instructions above for Docker, do:

apt update
apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update

And just install docker-cli:

apt install -y docker-ce-cli

Now it's safe to start the cluster:

minikube start --vm-driver=none --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --kubernetes-version=v1.29.0 --container-runtime=containerd

CRI-O

Running minikube via CRI-O requires docker-cli.

As per the instructions above for Docker, do:

apt update
apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update

And just install docker-cli:

apt install -y docker-ce-cli

Now it's safe to start the cluster:

minikube start --vm-driver=none --extra-config=kubeadm.ignore-preflight-errors=SystemVerification --kubernetes-version=v1.29.0 --container-runtime=crio

Removal

If you want to delete the cluster, just execute:

minikube delete

If you want to delete all profiles, then:

minikube delete --all

If you need to remove the minikube binary itself:

rm -f /usr/local/bin/minikube

microk8s

Installation

Let's install the service that manages snap packages:

apt install -y snapd

Install microk8s through it:

snap install microk8s --classic

Add a user to the microk8s group:

usermod -a -G microk8s $USER
chown -f -R $USER ~/.kube

Preferably, re-login as a normal user so the group change takes effect:

su - $USER

See the status of the cluster:

microk8s status --wait-ready

Let's create an alias so we don't have to type microk8s before every kubectl command:

alias kubectl='microk8s kubectl'
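
Optionally, enable basic add-ons; dns is a sensible minimum:

microk8s enable dns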

Removal

To delete microk8s execute:

snap remove microk8s

Don't forget the alias:

unalias kubectl

K3s

Installation

Pretty straightforward installation here:

curl -sfL https://get.k3s.io | sh -

Create an alias so we don't have to type k3s before every kubectl command:

alias kubectl='k3s kubectl'

Or point kubectl at the k3s config:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Removal

To uninstall K3s, simply execute:

/usr/local/bin/k3s-uninstall.sh

Also, don't forget the alias:

unalias kubectl

k0s

Installation

Here's a pretty straightforward installation:

curl -sSLf https://get.k0s.sh | sh

Install as a service:

k0s install controller --single

Start the cluster:

k0s start

Check the status of the cluster:

k0s status

Create an alias so we don't have to type k0s before every kubectl command:

alias kubectl='k0s kubectl'

Removal

Stop the service:

k0s stop

Remove the k0s service and all dependencies:

k0s reset

If you need to remove the k0s binary itself:

rm -f /usr/local/bin/k0s

Also, don't forget the alias:

unalias kubectl

Test any cluster to see if it's working properly

Execute:

kubectl get nodes && \
echo && \
kubectl get services && \
echo && \
kubectl get pods -A

The output should show the current state of the cluster; if your node is listed with STATUS = Ready, congratulations.

Verify the network health of any cluster

Create a deployment:

kubectl create deployment hello-world --image=registry.k8s.io/echoserver:1.10

Create a service for the deployment:

kubectl expose deployment hello-world --type=NodePort --port=8080

Watch the pod start up and note the service's NodePort:

kubectl get pods -o wide
kubectl get service

Look for this: 8080:XXXXX/TCP, where XXXXX is the NodePort.
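
For example, the service line might look like this (30080 is a hypothetical NodePort; yours will differ):

hello-world   NodePort   10.109.120.16   <none>   8080:30080/TCP   15s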

Let's check the pod's availability from outside Proxmox. On your laptop, run a curl request:

curl <ip address>:XXXXX

Where <ip address> is the IP address of the LXC container, and XXXXX is the external port of our pod.

The response should look something like this:

Hostname: hello-world-576c8bfdf8-c269c

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

And so on.

After successful checks, let's remove the test deployment and the service.

kubectl delete services hello-world
kubectl delete deployment hello-world

This is the end of K8s deployment in LXC.

Materials used

I highly recommend reading the resources below for a broader understanding of all the processes:

Installing Docker Engine

Installing Kubernetes

Installing minikube

Installing microk8s

Installing kind

Installing K3s

Installing k0s

Installing cri-o

Setting up cgroup

Garrett Mills' blog article

Instructions for running microk8s in LXD

Instructions for running Kubernetes in redOS

Plans

I will try to keep improving this article, both for myself and for all of you.

Here are my current plans for it:

  • Launch k0s
  • Launch kind (currently unable to launch)
  • Raise a cluster via kubeadm
  • Set up cri-o support for minikube

Feel free to post your ideas in Discussions.