Since 2018 I have been working on hybrid cloud solutions (one Kubernetes cluster spanning different datacenters/cloud providers). At first I used my own Kubernetes bootstrap tools, but now I prefer Talos.
To implement the hybrid cloud solution, I have contributed to the following projects:
- Talos
- Talos CCM
- Proxmox CCM
- Proxmox CSI
- Terraform manual testing
- Openstack CCM/CSI
- Terraform plugins for different clouds
- Cloud Controller Manager for different clouds
I have experience (as a developer) in the following Kubernetes areas:
- Cloud Controller Manager (CCM)
- Container Storage Interface (CSI)
- Custom Resource Definitions (CRD)
- Node autoscaling
Languages and tools:
- golang
- python
- ruby
- php
- asm
- c/c++
- pascal
- bash/sh
- terraform (examples)
- ansible (repositories)
- puppet
I have been using Kubernetes since version 0.3. When kubeadm was still unstable, I created my own tools to bootstrap Kubernetes clusters. The first version was based on Puppet; later I moved to my own solution, ansible-role-kubernetes, built the hard way. It is now deprecated and no longer supported.
Average bootstrap time on bare metal:
- control plane with etcd cluster - 15 min
- worker nodes - 5 min, including 2 reboots (for testing purposes)
Control plane installation types:
- systemd configs
- static pods (kubelet YAML manifests)
- deployments (Kubernetes Deployments/DaemonSets)
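As an illustration of the static-pod installation type: static pods are plain manifests dropped into the kubelet's staticPodPath, which the kubelet runs without the API server. A minimal hypothetical example (image tag and flags are illustrative, not the author's actual configs):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
# Picked up by the kubelet from its staticPodPath; values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.30.0
      command:
        - kube-apiserver
        - --etcd-servers=https://127.0.0.1:2379
        - --authorization-mode=Node,RBAC
```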
All Kubernetes certificates are generated by Ansible with ABAC/RBAC policies. Host firewall creation depends on the CNI plugin (I now prefer Cilium as the CNI).
I have expertise in:
- hybrid/multi-cluster, bare metal + cloud - building a large cluster with a single distributed Kubernetes control plane spanning different datacenters/cloud providers; hybrid nodes mix virtual machines and bare-metal servers
- kubernetes from scratch (ansible/puppet roles)
- cni - cilium, weave net, kube-router, kilo
- fluent-bit/fluentd + plugin, loki, clickhouse
- grafana, prometheus, victoriametrics, custom exporters (golang, python)
- ingress-nginx, gloo, haproxy, traefik, skipper, DDoS protection based on Lua/ModSecurity
- helm + sops, fluxcd, argocd, teamcity, github actions inside kubernetes
- external services exporter - lets developers route requests from inside the Kubernetes cluster to their local machines
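One standard Kubernetes way to implement the routing described above is a selector-less Service backed by a hand-written Endpoints object that points at the developer's machine; a minimal sketch (names and the IP are placeholders, and the author's actual exporter may work differently):

```yaml
# A Service with no selector, backed by a manually managed Endpoints
# object; in-cluster clients call my-backend:8080 and reach the
# developer's machine at the placeholder IP below.
apiVersion: v1
kind: Service
metadata:
  name: my-backend
spec:
  ports:
    - port: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-backend   # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.50   # developer's local machine
    ports:
      - port: 8080
```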
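A custom exporter of the kind listed above can be as small as an HTTP handler emitting the Prometheus text exposition format. A minimal Python sketch using only the standard library (metric name `demo_up` and port are placeholders, not the author's actual exporters):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


def render_metrics(up: int) -> str:
    """Render a single gauge in the Prometheus text exposition format."""
    return (
        "# HELP demo_up Whether the monitored service is reachable.\n"
        "# TYPE demo_up gauge\n"
        f"demo_up {up}\n"
    )


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics(1).encode()  # a real exporter probes its target here
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 9100) -> None:
    """Blocking entry point; Prometheus scrapes http://host:<port>/metrics."""
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

Production exporters usually build on the official Prometheus client libraries instead, which handle escaping and metric registries.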
Unattended installation system via CD-ROM templates (presets), PXE boot, and prepared system images. Puppet roles + Hiera. Automatic server discovery/inventory system. Linux kernel optimization: NUMA balancing, IRQ affinity. Xen/KVM host virtualization with device pass-through (VT-d) and VNFs. LXC container deployment system (Docker-like): pre-built containers launched in dev/prod environments. Private cloud on OpenStack with custom network plugins.
- linux unattended installation
- puppet + hiera + activemq (~60 modules + 2 ruby libs)
- ansible (~40 roles)
- PRs to foreman project
- virtualization - Xen/KVM with VT-d and NUMA optimization
- lxc with pre-built templates - like Docker but with only one layer (deploy system)
- proxmox integrations for kubernetes.
- openstack from scratch using puppet + one network plugin.
- AWS, Azure, GCP, Oracle, Digitalocean, Hetzner, Ovh, Scaleway, Upcloud and ~10 other clouds
- terraform + plugins
- talos
- debian + build deb packages
- ubuntu
- coreos for jenkins workers
- sles as VM hypervisor
- centos
- openbsd
- freebsd
- openwrt (custom firmware)
I managed distributed DNS clusters across different datacenters, utilizing L3 Cisco switches with access policies to protect production environments from development clusters. I implemented port mirroring for analytics and conducted load tests based on real user requests. Additionally, I used BGP within datacenters for efficient load balancing.
- L7 DDoS protection
- firewalls - iptables + ipsets, pf
- cisco switches - ACLs, VLANs, port channels
- bgp - bird for load balancing
- bind9, powerdns - primary/secondary/geo view
- soft gateways - linux/openbsd/openwrt, multi-WAN load balancing
- openvpn, ipsec, wireguard
- postgres + walg/barman
- clickhouse
- redis, keyDB + walg
- mongodb + walg
- rabbitmq
- influxdb
- memcache
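The bird-based BGP load balancing above typically works by announcing an anycast service address from each datacenter to the upstream switch. A minimal BIRD 1.x sketch (ASNs, addresses, and protocol names are placeholders, not the production config):

```
# bird.conf - announce an anycast service VIP over BGP (illustrative).
protocol static anycast {
  route 192.0.2.10/32 via "lo";   # service VIP bound to the loopback
}

protocol bgp tor {
  local as 65010;
  neighbor 10.0.0.1 as 65000;     # top-of-rack switch / upstream router
  export where proto = "anycast"; # announce only the VIP
  import none;
}
```

With the same VIP announced from several nodes, the upstream router spreads traffic via ECMP, giving simple L3 load balancing and fast failover when a node withdraws its route.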
Self-hosted GitHub Actions runners in Kubernetes. Workers have a Docker cache registry and a distributed Docker build cluster. A free version of TeamCity runs in Kubernetes (3 agents). All builds run in Docker; TeamCity agents ship with a limited set of utilities plus docker/nerdctl.
To keep pipelines CI/CD-agnostic, I put a Makefile at the top of the repository. The CI/CD system runs only make commands with parameters, which makes it easy to switch CI/CD solutions.
Most popular tools:
- teamcity (optional deploy prepared containers)
- github actions (build and test code)
- jenkins (distributed cluster with matrix tests)
- makefile
- dockerfile + buildkit
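The Makefile-on-top-of-the-repository approach can be sketched as follows (target names, registry, and commands are illustrative, not the actual pipeline):

```makefile
# Top-level Makefile: the only interface CI/CD systems call.
IMAGE ?= registry.example.com/app
TAG   ?= $(shell git rev-parse --short HEAD)

.PHONY: build test push deploy

build:
	docker build -t $(IMAGE):$(TAG) .

test:
	docker run --rm $(IMAGE):$(TAG) ./run-tests.sh

push:
	docker push $(IMAGE):$(TAG)

deploy:
	helm upgrade --install app ./charts/app --set image.tag=$(TAG)
```

Any CI system (TeamCity, GitHub Actions, Jenkins) then only runs commands like `make build test push TAG=v1.2.3`, so swapping providers means rewriting a thin YAML wrapper, not the build logic.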
I set up our own testnets for Bitcoin, Ethereum, and Waves to facilitate integration testing within CI/CD pipelines. I launched a distributed cryptocurrency network in production environments across multiple countries, leveraging Kubernetes and Helm for deployments. This setup included Prometheus exporters, Grafana dashboards, and alerts, similar to Infura's infrastructure.
Additionally, I developed and deployed a smart contract on the Ethereum network.
Experience in production environments:
- ethereum
- bitcoin
- waves
- ergo
- work time accounting system
- network gateways, NAT, web proxy with filtering, website blocker
- openbsd as router (read only root fs)
- primary/secondary dns (bind9)
- squid + filtering
- mail server (sendmail + sasl, sendmail filters m4) + pop3/imap server (dovecot)
- tftp/dhcp boot + unattended Windows installation (about 30 minutes to fully prepare a Windows workstation, no system administrator required)
- office workstations (based on ubuntu)
- external automation/management for Linux, similar to Puppet/Chef, but based on a Track system (Python) and Python daemons on the workstations; daemons receive jobs from the Track system, run them, and post the results back to the issue
- Microchip PIC (16-bit) home automation (asm)
- network hardware monitoring - 10 Mbit bandwidth, error rate, packet collisions (Windows application using libpcap)
- home ISP, gateways, firewalls, traffic billing, pptp/pppoe server (for windows clients)
- high-performance file server (tuned Samba) + journaling file system (Samba virtual file system) with business logic around it (FreeBSD)
- DOS game similar to Arkanoid (Pascal with inline asm)