This is based on the great work that https://github.com/itwars did with Ansible; all that was left to do was to put it all together with Terraform and Proxmox!
- The deployment environment must have Ansible 2.4.0+
- Terraform installed
- Proxmox server
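Before you start, you can quickly check that the tools are in place on the deployment machine:
ansible --version
terraform version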
For updated documentation, check out my Medium.
This setup relies on cloud-init images.
Using a cloud-init image saves us a lot of time and works great! I use the Ubuntu Jammy image, but you can use whatever distro you like.
To configure the cloud-init image, you will need to connect to a Linux server and run the following.
Install the image tools on the server (you will need another server; these tools cannot be installed on Proxmox):
apt-get install libguestfs-tools
Get the image that you would like to work with. You can browse to https://cloud-images.ubuntu.com and select any version you would like. For Debian, go to https://cloud.debian.org/images/cloud/. It can also work for CentOS (R.I.P.).
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
Update the image and install the QEMU guest agent; this is a must if we want Terraform to work properly. It can take a minute to add the package to the image.
virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent
Now that we have the image, we need to move it to the Proxmox server. We can do that using scp:
scp jammy-server-cloudimg-amd64.img Proxmox_username@Proxmox_host:/path_on_Proxmox/jammy-server-cloudimg-amd64.img
Now we should have the image configured and on our Proxmox server. Let's start creating the VM:
qm create 9000 --name "jammy-template" --memory 2048 --net0 virtio,bridge=vmbr0
For Ubuntu images, rename the image suffix:
mv jammy-server-cloudimg-amd64.img jammy-server-cloudimg-amd64.qcow2
Import the disk into the VM:
qm importdisk 9000 jammy-server-cloudimg-amd64.qcow2 local-lvm
Configure the VM to use the imported disk:
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
Add a cloud-init drive to the VM:
qm set 9000 --ide2 local-lvm:cloudinit
Set the VM to boot from the imported disk:
qm set 9000 --boot c --bootdisk scsi0
Add a serial console to the VM (cloud-init images expect one):
qm set 9000 --serial0 socket --vga serial0
Good! So we are almost done with the image. Now we can set the base configuration for it. Connect to the Proxmox web UI, open your VM, and look at the Cloud-Init tab; there you will find some more parameters that we need to change.
You will need to set the user name and password, and add the SSH public key so we can connect to the VM later using Ansible and Terraform.
Update the variables and click on Regenerate Image.
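If you prefer the CLI over the web UI, the same cloud-init values can be set with qm. A minimal sketch, assuming a user named ubuntu and a public key at ~/.ssh/id_rsa.pub (adjust both, and pick a real password):
qm set 9000 --ciuser ubuntu --cipassword 'change-me' --sshkeys ~/.ssh/id_rsa.pub
qm set 9000 --ipconfig0 ip=dhcp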
Great! Now we can convert the VM to a template and start working with Terraform:
qm template 9000
Our Terraform file also creates a dynamic hosts file for Ansible, so we need to create the inventory files first:
cp -R inventory/sample inventory/my-cluster
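For reference, generating the hosts file from Terraform is typically done with a local_file resource. This is only a hedged sketch with assumed resource names (master, node) and key path, not the exact code from this repo:

resource "local_file" "hosts" {
  # render the Ansible inventory from the VMs Terraform created
  filename = "../inventory/my-cluster/hosts.ini"
  content  = <<-EOT
    [master]
    ${proxmox_vm_qemu.master.default_ipv4_address} ansible_ssh_private_key_file=~/.ssh/id_rsa

    [node]
    %{ for ip in proxmox_vm_qemu.node[*].default_ipv4_address ~}
    ${ip} ansible_ssh_private_key_file=~/.ssh/id_rsa
    %{ endfor ~}

    [k3s_cluster:children]
    master
    node
  EOT
}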
Rename the file terraform/vars.sample to terraform/vars.tf and update all the vars.
There you can select how many nodes you would like in your cluster and configure the name of the base image.
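As an illustration only (the actual variable names in vars.tf may differ), it might contain something like:

variable "template_name" {
  # name of the template we created above
  default = "jammy-template"
}

variable "num_k3s_nodes" {
  # how many VMs to create for the cluster
  default = 5
}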
To run Terraform, cd into the terraform directory and run:
terraform init
terraform plan
terraform apply
It can take some time to create the servers, but you can monitor their progress in the Proxmox UI.
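You can also watch them from the Proxmox shell:
qm list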
First, update the vars file in inventory/my-cluster/group_vars/all.yml and set the user name to the one you selected in the cloud-init setup.
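The relevant line should look something like this (other variables omitted; the exact contents depend on the repo):
ansible_user: ubuntu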
After you run Terraform, your hosts file should look like this:
[master]
192.168.1.200 ansible_ssh_private_key_file=~/.ssh/rober
[node]
192.168.1.201 ansible_ssh_private_key_file=~/.ssh/rober
192.168.1.202 ansible_ssh_private_key_file=~/.ssh/rober
192.168.1.203 ansible_ssh_private_key_file=~/.ssh/rober
192.168.1.204 ansible_ssh_private_key_file=~/.ssh/rober
[k3s_cluster:children]
master
node
Start provisioning of the cluster using the following command:
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
To get access to your Kubernetes cluster, just copy the kubeconfig from the master node (replace debian with the user you set in cloud-init):
scp debian@master_ip:~/.kube/config ~/.kube/config
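Then verify that all the nodes joined the cluster:
kubectl get nodes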