A terminal-based tool to install slimmer k8s distros on metal, with batteries included!
- Deploys Argo CD by default, so you can manage your entire lab using files in open source git repos
- Argo CD ships with a dashboard with a custom theme
- Supports multiple k8s distros
- Specializes in using Bitwarden (though it's not required) to store sensitive values both locally and on your cluster (OpenBao coming soon!)
- Manages all your authentication needs centrally using Zitadel (self-hosted IAM/SSO) and Vouch (for using OAuth2 on sites that don't support it)
- Supports initialization on a range of common self-hosted apps
- featured initialized apps such as Zitadel, Nextcloud, Matrix, and Home Assistant include backups and restores
- Lots o' docs
Be sure to check out our full installation guide, but the gist of it is that smol-k8s-lab can be installed via pipx (or brew, coming soon). smol-k8s-lab requires Python 3.11+ (and pipx). If you've already got both and the other prerequisites, you should be able to:
# install the CLI
pipx install smol-k8s-lab
# Check the help menu before proceeding
smol-k8s-lab --help
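If you're not sure whether the prerequisites are in place, a quick sanity check might look like this (assuming the standard python3 and pipx launcher names):

```shell
# check that Python 3.11+ and pipx are available before installing
python3 --version
command -v pipx >/dev/null && pipx --version || echo "pipx not found; install it first"
```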
brew is the preferred installation method going forward for macOS/Debian/Ubuntu, as it will also install any non-Python prerequisites for you, so you don't need to worry about them. This method is new, so please let us know if anything isn't working for you.
# tap the special homebrew repo for our formula and install it
brew install small-hack/tap/smol-k8s-lab
Then you should be able to check the version and cli options with:
smol-k8s-lab --help
Check out our TUI docs for more info on how to get started playing with smol-k8s-lab :-)
After you've followed the installation instructions, if you're new to smol-k8s-lab, initialize a new config file:
# we'll walk you through any configuration needed before
# saving the config and deploying it for you
smol-k8s-lab
Upgrading config from v4.x to v5.x
If you've installed smol-k8s-lab prior to v5.0.0, please back up your old configuration, remove the ~/.config/smol-k8s-lab/config.yaml (or $XDG_CONFIG_HOME/smol-k8s-lab/config.yaml) file entirely, and then run the following with either pip or pipx:
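As a sketch of the backup step above (assuming the default XDG config location):

```shell
# back up the old config (default location), then remove it so a fresh one is generated
CONFIG="${XDG_CONFIG_HOME:-$HOME/.config}/smol-k8s-lab/config.yaml"
if [ -f "$CONFIG" ]; then
    cp "$CONFIG" "$CONFIG.bak"
    rm "$CONFIG"
fi
```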
if using pip:
# this uninstalls the old smol-k8s-lab for python 3.11
pip3.11 uninstall smol-k8s-lab
# this installs smol-k8s-lab for python 3.12
pip3.12 install --upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
or if using pipx:
# this upgrades smol-k8s-lab
pipx upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
We have done a massive upgrade of the config file. You'll need to update your configs based on the details in #210. The main changes are to the following (check each doc link for details):
- accessibility features
- k3s nodes section
- backups and restores
- sensitive values
- k9s has been removed in favor of the run command (hint: you can still use k9s via the run command)
Upgrading config from v3.7.1 to v4.x
If you've installed smol-k8s-lab prior to v4.0.0, please back up your old configuration, remove the ~/.config/smol-k8s-lab/config.yaml (or $XDG_CONFIG_HOME/smol-k8s-lab/config.yaml) file entirely, and then run the following with either pip or pipx:
if using pip:
# this upgrades smol-k8s-lab
pip3.11 install --upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
or if using pipx:
# this upgrades smol-k8s-lab
pipx upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
The main breaking changes between v3.7.1 and v4.0.0 are that we now enable metrics by default on most apps. Because of this, you need to have the Prometheus ServiceMonitor CRD installed ahead of time. Luckily, we now provide that as an app as well :) If you deleted your config and created a new one, it will already be there, but if you want to reuse your old config, you can add the app like this:
apps:
  prometheus_crds:
    description: |
      [link=https://prometheus.io/docs/introduction/overview/]Prometheus[/link] CRDs to start with.
      You can optionally disable this if you don't want to deploy apps with metrics.
    enabled: true
    argo:
      # secret keys to make available to Argo CD ApplicationSets
      secret_keys: {}
      # git repo to install the Argo CD app from
      repo: https://github.com/small-hack/argocd-apps
      # path in the argo repo to point to. Trailing slash very important!
      path: prometheus/crds/
      # either the branch or tag to point at in the argo repo above
      revision: main
      # namespace to install the k8s app in
      namespace: prometheus
      # recurse directories in the provided git repo
      directory_recursion: false
      # source repos for Argo CD App Project (in addition to argo.repo)
      project:
        name: prometheus
        source_repos:
          - https://github.com/prometheus-community/helm-charts.git
        destination:
          # automatically includes the app's namespace and argocd's namespace
          namespaces:
            - kube-system
            - prometheus
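To check whether the ServiceMonitor CRD is already installed, a quick kubectl query (assuming kubectl is pointed at your cluster) might look like:

```shell
# look for the ServiceMonitor CRD that the prometheus_crds app provides
kubectl get crd servicemonitors.monitoring.coreos.com 2>/dev/null \
  || echo "ServiceMonitor CRD not found - enable the prometheus_crds app first"
```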
If using the default repos, please also disable directory_recursion for:
- your prometheus stack app
- zitadel
For all changes, please check out PR #206.
Upgrading config from v2.2.4 to v3.x
If you've installed smol-k8s-lab prior to v3.0.0, please back up your old configuration, remove the ~/.config/smol-k8s-lab/config.yaml (or $XDG_CONFIG_HOME/smol-k8s-lab/config.yaml) file entirely, and then run the following with either pip or pipx:
if using pip:
# this upgrades smol-k8s-lab
pip3.11 install --upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
or if using pipx:
# this upgrades smol-k8s-lab
pipx upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
The main breaking changes between v2.2.4 and v3.0 are as follows:
- Home Assistant has graduated from demo app to live app
You'll need to change apps.home_assistant.argo.path to either home-assistant/toleration_and_affinity/ if you're using node labels and taints, or home-assistant/ if you're deploying to a single-node cluster. Here's an example with no tolerations or node affinity:
apps:
  home_assistant:
    enabled: false
    description: |
      [link=https://home-assistant.io]Home Assistant[/link] is a home IOT management solution.
      By default, we assume you want to use node affinity and tolerations to keep home assistant pods on certain nodes and keep other pods off said nodes. If you don't want to use either of these features but still want to use the small-hack/argocd-apps repo, first change the argo path to /home-assistant/ and then remove the 'toleration_' and 'affinity' secret_keys from the yaml file under apps.home_assistant.description.
    argo:
      secret_keys:
        hostname: "home-assistant.coolestdogintheworld.dog"
      repo: https://github.com/small-hack/argocd-apps
      path: home-assistant/
      revision: main
      namespace: home-assistant
      directory_recursion: false
      project:
        source_repos:
          - http://jessebot.github.io/home-assistant-helm
        destination:
          namespaces:
            - argocd
And here's an example for labeled and tainted nodes, where your pod can use tolerations and node affinity:
apps:
  home_assistant:
    enabled: false
    description: |
      [link=https://home-assistant.io]Home Assistant[/link] is a home IOT management solution.
      By default, we assume you want to use node affinity and tolerations to keep home assistant pods on certain nodes and keep other pods off said nodes. If you don't want to use either of these features but still want to use the small-hack/argocd-apps repo, first change the argo path to /home-assistant/ and then remove the 'toleration_' and 'affinity' secret_keys from the yaml file under apps.home_assistant.description.
    argo:
      secret_keys:
        hostname: "home-assistant.coolestdogintheworld.dog"
        toleration_key: "blutooth"
        toleration_operator: "Equals"
        toleration_value: "True"
        toleration_effect: "NoSchedule"
        affinity_key: "blutooth"
        affinity_value: "True"
      repo: https://github.com/small-hack/argocd-apps
      path: home-assistant/toleration_and_affinity/
      revision: main
      namespace: home-assistant
      directory_recursion: false
      project:
        source_repos:
          - http://jessebot.github.io/home-assistant-helm
        destination:
          namespaces:
            - argocd
- New k3s feature for adding additional nodes
This feature changes k8s_distros.k3s.nodes to be a dictionary so that you can include additional nodes for us to join to the cluster after we create it, but before we install apps. Here's an example of how you can add a new node to k3s on installation:
k8s_distros:
  k3s:
    enabled: false
    k3s_yaml:
      # if you enable MetalLB, we automatically add servicelb to the disable list
      # enables encryption at rest for Kubernetes secrets
      secrets-encryption: true
      # disables traefik so we can enable ingress-nginx, remove if you're using traefik
      disable:
        - "traefik"
      node-label:
        - "ingress-ready=true"
      kubelet-arg:
        - "max-pods=150"
    # nodes to SSH to and join to cluster. example:
    nodes:
      # name can be a hostname or ip address
      serverfriend1.lan:
        # change ssh_key to the name of a local private key to use
        ssh_key: id_rsa
        # must be node type of "worker" or "control_plane"
        node_type: worker
        # labels are optional, but may be useful for pod node affinity
        node_labels:
          - iot=true
        # taints are optional, but may be useful for pod tolerations
        node_taints:
          - iot=true:NoSchedule
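Once the cluster is up, you can sanity-check that the joined node carries its labels and taints (serverfriend1.lan is the example node name from the config above; assumes kubectl access to the cluster):

```shell
# confirm the example node joined with its labels and taints
if command -v kubectl >/dev/null; then
    kubectl get nodes --show-labels
    kubectl describe node serverfriend1.lan | grep -i taints
fi
```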
If you don't want to add any nodes, change your nodes section to this:
k8s_distros:
  k3s:
    enabled: false
    k3s_yaml:
      # if you enable MetalLB, we automatically add servicelb to the disable list
      # enables encryption at rest for Kubernetes secrets
      secrets-encryption: true
      # disables traefik so we can enable ingress-nginx, remove if you're using traefik
      disable:
        - "traefik"
      node-label:
        - "ingress-ready=true"
      kubelet-arg:
        - "max-pods=150"
    # nodes to SSH to and join to cluster. example:
    nodes: {}
- cert-manager now supports the DNS01 challenge solver using the Cloudflare provider
This feature reworks the apps.cert_manager.init and apps.cert_manager.argo.secret_keys sections.
Here's an example using the HTTP01 challenge solver, previously the only supported solver; if you want everything to work just as it did before, your config file should look like this:
apps:
  cert_manager:
    enabled: true
    description: |
      [link=https://cert-manager.io/]cert-manager[/link] lets you use LetsEncrypt to generate TLS certs for all your apps with ingress.
      smol-k8s-lab supports optional initialization by creating [link=https://cert-manager.io/docs/configuration/acme/]ACME Issuer type[/link] [link=https://cert-manager.io/docs/concepts/issuer/]ClusterIssuers[/link] using either the HTTP01 or DNS01 challenge solvers. We create two ClusterIssuers: letsencrypt-staging and letsencrypt-prod.
      For the DNS01 challenge solver, you will need to either export $CLOUDFLARE_API_TOKEN as an env var, or fill in the sensitive value for it each time you run smol-k8s-lab.
      Currently, Cloudflare is the only supported DNS provider for the DNS01 challenge solver. If you'd like to use a different DNS provider or a different Issuer type altogether, please set one up outside of smol-k8s-lab. We also welcome [link=https://github.com/small-hack/smol-k8s-lab/pulls]PRs[/link] to add these features :)
    # initialization of the app through smol-k8s-lab
    init:
      # Deploys staging and prod ClusterIssuers and prompts you for
      # values if they were not set. Switch to false if you don't want
      # to deploy any ClusterIssuers
      enabled: true
      values:
        # used to generate certs and alert you if they're going to expire
        email: "you@emailsforfriends.com"
        # choose between "http01" or "dns01"
        cluster_issuer_acme_challenge_solver: http01
        # only needed if cluster_issuer_challenge_solver set to dns01,
        # currently only cloudflare is supported
        cluster_issuer_acme_dns01_provider: cloudflare
      sensitive_values: []
    argo:
      secret_keys: {}
      # git repo to install the Argo CD app from
      repo: "https://github.com/small-hack/argocd-apps"
      # path in the argo repo to point to. Trailing slash very important!
      path: "cert-manager/"
      # either the branch or tag to point at in the argo repo above
      revision: main
      # namespace to install the k8s app in
      namespace: "cert-manager"
      # recurse directories in the provided git repo
      directory_recursion: false
      # source repos for cert-manager CD App Project (in addition to argo.repo)
      project:
        source_repos:
          - https://charts.jetstack.io
        destination:
          # automatically includes the app's namespace and argocd's namespace
          namespaces:
            - kube-system
And here's how you'd use the new DNS01 feature (keep in mind you need to either provide a sensitive value each time you run smol-k8s-lab, OR export $CLOUDFLARE_API_TOKEN as an env var prior to running smol-k8s-lab):
apps:
  cert_manager:
    enabled: true
    description: |
      [link=https://cert-manager.io/]cert-manager[/link] lets you use LetsEncrypt to generate TLS certs for all your apps with ingress.
      smol-k8s-lab supports optional initialization by creating [link=https://cert-manager.io/docs/configuration/acme/]ACME Issuer type[/link] [link=https://cert-manager.io/docs/concepts/issuer/]ClusterIssuers[/link] using either the HTTP01 or DNS01 challenge solvers. We create two ClusterIssuers: letsencrypt-staging and letsencrypt-prod.
      For the DNS01 challenge solver, you will need to either export $CLOUDFLARE_API_TOKEN as an env var, or fill in the sensitive value for it each time you run smol-k8s-lab.
      Currently, Cloudflare is the only supported DNS provider for the DNS01 challenge solver. If you'd like to use a different DNS provider or a different Issuer type altogether, please set one up outside of smol-k8s-lab. We also welcome [link=https://github.com/small-hack/smol-k8s-lab/pulls]PRs[/link] to add these features :)
    # initialization of the app through smol-k8s-lab
    init:
      # Deploys staging and prod ClusterIssuers and prompts you for
      # values if they were not set. Switch to false if you don't want
      # to deploy any ClusterIssuers
      enabled: true
      values:
        # used to generate certs and alert you if they're going to expire
        email: "you@emailsforfriends.com"
        # choose between "http01" or "dns01"
        cluster_issuer_acme_challenge_solver: dns01
        # only needed if cluster_issuer_challenge_solver set to dns01,
        # currently only cloudflare is supported
        cluster_issuer_acme_dns01_provider: cloudflare
      sensitive_values:
        # can be passed in as env vars if you pre-pend CERT_MANAGER_
        # e.g. CERT_MANAGER_CLOUDFLARE_API_TOKEN
        - CLOUDFLARE_API_TOKEN
    argo:
      secret_keys: {}
      # git repo to install the Argo CD app from
      repo: "https://github.com/small-hack/argocd-apps"
      # path in the argo repo to point to. Trailing slash very important!
      path: "cert-manager/"
      # either the branch or tag to point at in the argo repo above
      revision: main
      # namespace to install the k8s app in
      namespace: "cert-manager"
      # recurse directories in the provided git repo
      directory_recursion: false
      # source repos for cert-manager CD App Project (in addition to argo.repo)
      project:
        source_repos:
          - https://charts.jetstack.io
        destination:
          # automatically includes the app's namespace and argocd's namespace
          namespaces:
            - kube-system
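As a sketch, exporting the token before launching smol-k8s-lab might look like this (the token value is a placeholder; substitute your real Cloudflare API token):

```shell
# placeholder token value - substitute your real Cloudflare API token
export CLOUDFLARE_API_TOKEN="example-token"
# the CERT_MANAGER_ prefixed form also works for the sensitive_values list
export CERT_MANAGER_CLOUDFLARE_API_TOKEN="$CLOUDFLARE_API_TOKEN"
# then run: smol-k8s-lab
```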
Upgrading config from v1.x to v2.x
If you've installed smol-k8s-lab prior to v2.0.0, please back up your old configuration, remove the ~/.config/smol-k8s-lab/config.yaml (or $XDG_CONFIG_HOME/smol-k8s-lab/config.yaml) file entirely, and then run the following:
# this upgrades smol-k8s-lab
pip3.11 install --upgrade smol-k8s-lab
# this initializes a new configuration
smol-k8s-lab
The main difference between the old and new config files is that for apps, we've added:
- apps.APPNAME.description - for adding a custom description; set it to whatever you like
- apps.APPNAME.argo.directory_recursion - so you can have bigger nested apps :)
- apps.APPNAME.argo.project.destination.namespaces - control which namespaces are allowed for a project
And we've changed:
- apps.APPNAME.argo.ref to apps.APPNAME.argo.revision
- apps.APPNAME.argo.project_source_repos to apps.APPNAME.argo.project.source_repos
And we've REMOVED:
- apps.APPNAME.argo.part_of_app_of_apps - this was mostly used internally, we think
Here's an example of an updated cert-manager app with the new config:
apps:
  cert_manager:
    # ! NOTE: you currently can't set this to false. It is necessary to deploy
    # most of our supported Argo CD apps since they often have TLS enabled either
    # for pod connectivity or ingress
    enabled: true
    description: |
      [link=https://cert-manager.io/]cert-manager[/link] lets you use LetsEncrypt to generate TLS certs for all your apps with ingress.
      smol-k8s-lab supports initialization by creating two [link=https://cert-manager.io/docs/concepts/issuer/]ClusterIssuers[/link] for both staging and production using a provided email address as the account ID for acme.
    # initialization of the app through smol-k8s-lab
    init:
      # Deploys staging and prod ClusterIssuers and prompts you for
      # cert-manager.argo.secret_keys if they were not set. Switch to false if
      # you don't want to deploy any ClusterIssuers
      enabled: true
    argo:
      secret_keys:
        # Used for letsencrypt-staging, to generate certs
        email: ""
      # git repo to install the Argo CD app from
      repo: "https://github.com/small-hack/argocd-apps"
      # path in the argo repo to point to. Trailing slash very important!
      path: "cert-manager/"
      # either the branch or tag to point at in the argo repo above
      revision: main
      # namespace to install the k8s app in
      namespace: "cert-manager"
      # recurse directories in the provided git repo
      directory_recursion: false
      # source repos for cert-manager CD App Project (in addition to argo.repo)
      project:
        source_repos:
          - https://charts.jetstack.io
        destination:
          # automatically includes the app's namespace and argocd's namespace
          namespaces:
            - kube-system
Note: this project is not officially affiliated with any of the below tooling or applications.
We always install the latest version of Kubernetes that is available from the distro's startup script.
| Distro | Description |
|---|---|
| k3s | The certified Kubernetes distribution built for IoT & Edge computing |
| k3d | TESTING PHASE k3s in docker |
| KinD | kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI. |
We tend to test on k3s first, then the other distros. k3d support coming soon.
All of these can be disabled, including Argo CD, which is optional; however, if Argo CD is not installed, smol-k8s-lab will only install MetalLB, ingress-nginx, and cert-manager.
| Application | Description | Initialization Supported |
|---|---|---|
| metallb | Loadbalancer and IP Address pool manager for metal | ✅ |
| ingress-nginx | The ingress controller allows access to the cluster remotely, needed for web traffic | ✅ |
| cert-manager | For SSL/TLS certificates | ✅ |
| Argo CD | GitOps - Continuous Deployment | ✅ |
| Argo CD Appset Secret Plugin | GitOps - Continuous Deployment | ✅ |
| ESO | external-secrets-operator integrates external secret management systems like Bitwarden or GitLab | ✅ |
| Bitwarden ESO Provider | Bitwarden external-secrets-operator provider | ✅ |
| ZITADEL | An identity provider and OIDC provider to provide SSO | ✅ |
| Vouch | Vouch proxy allows you to secure web pages that lack authentication, e.g. prometheus | ✅ |
| Prometheus Stack | Prometheus monitoring and logging stack using loki/promtail, alert manager, and grafana | ✅ |
For a complete list of installable applications, check out the default apps docs. To install your own custom apps, you can check out an example via the config file or learn how to do it via the TUI.
This project is somewhat stable and actively supported, so if you'd like to contribute or you've found a bug, feel free to open an issue (and/or pull request), and we'll try to take a look ASAP!