Install a Kubernetes cluster with Ansible on any infrastructure. Have a vanilla, almost-production-ready cluster in no time!
This project aims to provide an automated way of deploying a Kubernetes cluster that is deliberately left unconfigured. This means that configuration and host preparation lie in the hands of the user executing the playbooks. After a successful install you will have a cluster with:
- containerd as container runtime
- runc for managing containers
- CNI plugins
- one or more etcd instances
- one or more masters (kube-apiserver, kube-controller-manager, kube-scheduler) installed as services managed by systemd
- one or more nodes (kubelet)
- a certificate authority for etcd
- a certificate authority for kube components
What you will not have:
- Host-level validation and pre-flight checks
- DNS plugin
- Pod networking
- Ingress
- Node roles
As always, it's highly recommended that you verify that your hosting environment meets the following requirements before installing Kubernetes and its components.
**On the control host**
- Ansible >= 2.4
- Python 2.7
- OpenSSL
**On each host in the cluster**
- Python 2.7
- ca-certificates
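A quick way to sanity-check the control host (these are the tools' standard version flags):

```sh
# Verify control-host prerequisites
ansible --version   # should report >= 2.4
python --version    # should report 2.7.x
openssl version
```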
There are a few variables that you may set to further customize the deployment.
Name | Required | Default | Description |
---|---|---|---|
`config_path` | False | `~/.ktrw` | A path to a directory on the control host where cluster certificates and configuration are created. |
`cluster_hostname` | False | `groups['masters'][0]` | The public hostname of the cluster. Defaults to the hostname of the first master in the inventory. For multi-master installations, the value of `cluster_hostname` is usually the address of a load balancer. |
`cluster_port` | False | `6443` | The port on which kube-apiserver listens. |
`cluster_name` | False | `cluster_hostname.split('.')[0]` | The name of the cluster, used for identification in `kubectl`. Defaults to the first segment of `cluster_hostname`. |
`cluster_cidr` | False | `10.19.0.0/16` | The CIDR range for Pods in the cluster. This effectively sets the `--cluster-cidr` flag on kube-controller-manager. |
`regenerate_certificates` | False | False | Set to True to force regeneration of certificates. This will overwrite existing certificates. |
`regenerate_keys` | False | False | Set to True to force regeneration of private keys. This will overwrite existing keys. |
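These are supplied like any other Ansible variables, for example with `-e` on the command line (a sketch; the hostname is a placeholder):

```sh
# Override defaults at install time via Ansible extra-vars
ansible-playbook -i inventory install.yml \
  -e cluster_hostname=lb.example.com \
  -e regenerate_certificates=True
```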
Configure an Ansible inventory file with the host groups `etcd`, `masters` and `nodes`, and assign each host to its respective group. Have a look at the examples. After you've defined an inventory, run the `install.yml` playbook.
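A minimal inventory might look like the following sketch (hostnames are placeholders; see the examples for authoritative layouts):

```ini
# Hypothetical inventory — replace hostnames with your own
[etcd]
etcd-01.example.com

[masters]
master-01.example.com

[nodes]
node-01.example.com
node-02.example.com
```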
Note! If you plan on using `flannel` in your cluster, you must set `cluster_cidr=10.244.0.0/16` in the inventory.
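In an INI-style inventory that could be expressed as:

```ini
[all:vars]
cluster_cidr=10.244.0.0/16
```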
```sh
ansible-playbook -i inventory install.yml
```
After installation, you will have a bare-minimum cluster. This means no cluster networking or DNS. Refer to the Kubernetes docs for more info; the choice is up to you. If you're not sure which ones to use, just stick with `flannel` and `CoreDNS` and you'll be fine.
Deploy `flannel` onto the cluster:

```sh
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
Deploy `CoreDNS` onto the cluster:

```sh
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
```
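To verify that the add-ons come up, you can watch the pods roll out (the namespace and pod names depend on the manifest versions):

```sh
# Watch the network and DNS pods start
kubectl get pods --all-namespaces -w
```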
To remove a cluster, run the `cleanup.yml` playbook:

```sh
ansible-playbook -i inventory cleanup.yml
```
During installation, private keys, certificates, and configuration are generated on the control host (the host that executes the playbook). They are then copied to the cluster hosts but are also kept on the control host in `~/.ktrw/`. This way, the cluster can be safely removed and re-installed without having to regenerate the cluster certificates. You may set the directory in which certificates and configuration are stored locally using the `config_path` variable.
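For example (the path is a placeholder):

```sh
# Store certificates and configuration in a custom location
ansible-playbook -i inventory install.yml -e config_path=/srv/ktrw
```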
Adding a node to an existing cluster is as easy as adding it to the inventory and running `install.yml` again.
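A sketch of that workflow (the new hostname is a placeholder):

```sh
# 1. Append the new host to the [nodes] group in your inventory:
#
#    [nodes]
#    node-01.example.com
#    node-02.example.com
#    node-03.example.com    # <- new node
#
# 2. Re-run the playbook
ansible-playbook -i inventory install.yml
```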
The following components and versions are installed:

Name | Version | Role |
---|---|---|
cni | 0.6.0 | node |
containerd | 1.2.1 | node |
etcd | 3.3.9 | etcd |
kube-apiserver | 1.13.1 | master |
kube-controller-manager | 1.13.1 | master |
kube-scheduler | 1.13.1 | master |
kube-proxy | 1.13.1 | node |
kubelet | 1.13.1 | node |
kubectl | 1.13.1 | controller |
runc | 1.0.0-rc6 | node |
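Once the cluster is up, one way to confirm the installed versions from the control host (standard `kubectl` commands; output formats vary by version):

```sh
# Client/server versions and per-node kubelet versions
kubectl version --short
kubectl get nodes -o wide
```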
This project is MIT licensed and accepts contributions via GitHub pull requests.