k8s-example

Setup Kubernetes cluster from scratch

1. Spinning Up Kubernetes Cluster

Prerequisites

  • An SSH key pair on your local machine
  • Servers running CentOS 7 with at least 2 GB RAM and 2 vCPUs each; you should be able to SSH into each server as the root user with your SSH key pair
  • Ansible installed on your local machine
  • An external load balancer is provisioned
  • A domain name is provisioned (e.g. astakhoff.ru)
  • Common names are assigned to the public IP (external load balancer):
    • k8s.astakhoff.ru
    • grafana.k8s.astakhoff.ru
    • prometheus.k8s.astakhoff.ru
    • store.k8s.astakhoff.ru

Setting Up the Workspace Directory and Ansible

  • Set up a ./ansible/hosts.ini file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.
  • Define common names in ./ansible/vars/cnames.yml (see the sketch below)
  • (Optional) Define Ingress NodePorts in ./ansible/vars/main.yml
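
For orientation, a minimal sketch of what ./ansible/vars/cnames.yml might contain; the variable names here are assumptions, and the file in the repository is authoritative:

  # Hypothetical contents of ./ansible/vars/cnames.yml (variable names are assumptions)
  k8s_cname: k8s.astakhoff.ru
  grafana_cname: grafana.k8s.astakhoff.ru
  prometheus_cname: prometheus.k8s.astakhoff.ru
  store_cname: store.k8s.astakhoff.ru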

Installing Kubernetes cluster

  • Install the required Ansible role and execute the playbook:
    ansible-galaxy install kwoodson.yedit
    ansible-playbook -i hosts.ini main.yml

Checking Kubernetes cluster

kubectl get no -o wide

image

kubectl get po -A -o wide

image

Checking Ingress controller

kubectl get po -n ingress-nginx -o wide

image

kubectl get svc -n ingress-nginx

image

curl http://64.227.132.241:30080

image

curl http://k8s.astakhoff.ru

image
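
The 30080/30443 ports used above are the NodePorts exposed by the ingress-nginx controller Service (they can be overridden via ./ansible/vars/main.yml). As a rough sketch, assuming a standard ingress-nginx deployment, that Service might look like this:

  # Hypothetical NodePort Service for the ingress-nginx controller; the actual
  # resource is created by the playbook and its ports come from ./ansible/vars/main.yml
  apiVersion: v1
  kind: Service
  metadata:
    name: ingress-nginx-controller
    namespace: ingress-nginx
  spec:
    type: NodePort
    selector:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/component: controller
    ports:
      - name: http
        port: 80
        targetPort: http
        nodePort: 30080
      - name: https
        port: 443
        targetPort: https
        nodePort: 30443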

2. Deploying Services to Kubernetes Cluster

Install cert-manager

  • Execute the playbook:
    ansible-playbook -i hosts.ini cert-manager.yml
  • Verify the installation:
    kubectl get pods --namespace cert-manager
    image
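
cert-manager needs an issuer before it can sign certificates for the ingress hosts. A minimal sketch of a Let's Encrypt ClusterIssuer, assuming the playbook creates an equivalent resource (the issuer name and e-mail are assumptions):

  # Hypothetical ClusterIssuer; the playbook may already create an equivalent one
  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt-prod
  spec:
    acme:
      server: https://acme-v02.api.letsencrypt.org/directory
      email: admin@astakhoff.ru              # assumption: ACME contact e-mail
      privateKeySecretRef:
        name: letsencrypt-prod-account-key
      solvers:
        - http01:
            ingress:
              class: nginx                   # challenges solved via ingress-nginx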

Deploy the sample app to the cluster

  • Deploy "Online Boutique" demo application8:
    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml
  • Wait for the Pods to be ready:
    watch kubectl get pods -o wide
    image
  • Deploy an ingress resource for the frontend service (a sketch of such a resource follows this list):
    ansible-playbook -i hosts.ini frontend-ingress-resource.yml
  • Access the web frontend using the public IP of a worker node and the ingress controller NodePort:
    curl -kI -H 'Host: store.k8s.astakhoff.ru' https://64.227.136.238:30443
    image
  • Access the web frontend in a browser using the external load balancer's public IP: image
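
As referenced above, a minimal sketch of what the frontend ingress resource might look like (the playbook's template is authoritative; the issuer and TLS secret names are assumptions):

  # Hypothetical ingress resource for the frontend service
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: frontend
    namespace: default
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod   # assumption: issuer name
  spec:
    ingressClassName: nginx
    tls:
      - hosts:
          - store.k8s.astakhoff.ru
        secretName: store-k8s-astakhoff-ru-tls           # assumption: secret name
    rules:
      - host: store.k8s.astakhoff.ru
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: frontend
                  port:
                    number: 80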

Deploy loadgenerator service using Helm via CI/CD

  • Update the existing loadgenerator deployment so that it can be managed by Helm:

    kubectl -n default label deployment loadgenerator "app.kubernetes.io/managed-by=Helm"
    kubectl -n default annotate deployment loadgenerator "meta.helm.sh/release-name=loadgenerator" "meta.helm.sh/release-namespace=default"
  • Prepare the kube config as a secret for the CI/CD tool:

    cat $HOME/.kube/config | base64

    image

  • Push a commit to the master branch containing changes to the loadgenerator source code or the prepared Helm chart.

  • Verify that the GitHub Actions workflow runs: image

Note: the GitHub Actions workflow configuration is here: https://github.com/viastakhov/k8s-example/blob/main/.github/workflows/ci-cd.yaml
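
The workflow linked above is authoritative. For orientation only, a rough sketch of how such a workflow might use the base64-encoded kube config secret to run a Helm deployment (the secret name, chart path, and trigger branch are assumptions):

  # Hypothetical CI/CD workflow sketch; see .github/workflows/ci-cd.yaml for the real one
  name: ci-cd
  on:
    push:
      branches: [ master ]          # assumption: trigger branch
  jobs:
    deploy:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3
        - name: Restore kube config from the base64-encoded secret
          run: |
            mkdir -p ~/.kube
            echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > ~/.kube/config
        - name: Deploy loadgenerator via Helm   # assumption: chart path
          run: |
            helm upgrade loadgenerator ./helm/loadgenerator \
              --install --namespace default --wait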

3. Monitoring Setup

Install OpenEBS local PV device storage engine

  • Execute the playbook:
    ansible-playbook -i hosts.ini openebs.yml
  • Verify installation:
    • Verify pods:
      kubectl get pods -n openebs
      image
    • Verify StorageClasses:
      kubectl get sc
      image
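
The StorageClasses created by OpenEBS can then back PersistentVolumeClaims, for example for the Prometheus stack below. A minimal sketch, assuming the default openebs-device class is present (the PVC name and size are arbitrary):

  # Hypothetical PVC consuming the OpenEBS local PV device StorageClass
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-local-device-pvc
    namespace: default
  spec:
    storageClassName: openebs-device
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi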

Install Prometheus stack

  • (Optional) Configure the Prometheus stack in ./ansible/vars/prom-stack.yml (see the sketch at the end of this section)

  • Execute the playbook:

    ansible-playbook -i hosts.ini prometheus.yml
  • Verify installation:

    • Verify that Prometheus-related pods are running in the monitoring namespace:
      kubectl get pod -n monitoring
      image
    • Verify that Prometheus-related PVCs are created in the monitoring namespace:
      kubectl get pvc -n monitoring
      image
    • Verify that Prometheus-related services are created in the monitoring namespace:
      kubectl get svc -n monitoring
      image
  • Import Grafana dashboards from the /dashboard folder

  • The following metrics are used:

    • Pod Resource Usage by Namespace:

      Metric       | Purpose
      -------------|--------
      CPU Usage    | Detect CPU bottlenecks; set CPU resource requests/limits and VerticalPodAutoscaler/HorizontalPodAutoscaler
      Memory Usage | Detect high memory pressure and memory leaks; set memory resource requests/limits and VerticalPodAutoscaler/HorizontalPodAutoscaler

      image

    • Node Resource Usage:

      Metric              | Purpose
      --------------------|--------
      CPU Usage           | Control CPU usage on the node; add CPU cores on demand
      Load Average        | Control system load on the node; add CPU cores on demand
      Memory Usage        | Control memory utilization on the node; add memory on demand
      Disk I/O            | Detect disk I/O bottlenecks
      Disk Space Usage    | Control disk space utilization
      Network Received    | Inspect inbound network traffic; detect received traffic that exceeds the bandwidth of the network interface
      Network Transmitted | Inspect outbound network traffic on the interface

      image image

    • Persistent Volume Usage:

      Metric             | Purpose
      -------------------|--------
      Volume Space Usage | Control volume space utilization per PVC
      Volume Inode Usage | Control the number of inodes available on the volume per PVC

      image

Note: Grafana URL: https://grafana.k8s.astakhoff.ru (login/password: guest/guest)
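
As referenced above, a rough sketch of the kind of Helm values the Prometheus playbook might apply, assuming it wraps the kube-prometheus-stack chart (the actual variables live in ./ansible/vars/prom-stack.yml and may differ):

  # Hypothetical kube-prometheus-stack values; the playbook's real configuration may differ
  grafana:
    adminUser: guest                           # matches the guest/guest login noted above
    adminPassword: guest
    ingress:
      enabled: true
      hosts:
        - grafana.k8s.astakhoff.ru
  prometheus:
    prometheusSpec:
      retention: 7d                            # assumption: retention period
      storageSpec:
        volumeClaimTemplate:
          spec:
            storageClassName: openebs-device   # backed by the OpenEBS engine above
            resources:
              requests:
                storage: 20Gi                  # assumption: volume size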

4. Logging Setup

Install Loki stack

  • Execute the playbook:
    ansible-playbook -i hosts.ini loki.yml
  • Wait for the Pods to be ready:
    kubectl get po -n monitoring -o wide --selector release=loki
    image
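
A rough sketch of the values such a deployment might use, assuming the playbook installs the grafana/loki-stack Helm chart as a release named loki (matching the release=loki selector above); the actual configuration may differ:

  # Hypothetical loki-stack values; the playbook's real configuration may differ
  loki:
    persistence:
      enabled: true
      storageClassName: openebs-device   # assumption: reuse the OpenEBS class
      size: 10Gi
  promtail:
    enabled: true                        # ship pod logs from every node to Loki
  grafana:
    enabled: false                       # Grafana already comes from the Prometheus stack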

Pod logs inspection

Open "Loki Logs" dashboard in Grafana in order to review pod logs: image image

Note: Grafana URL: https://grafana.k8s.astakhoff.ru (login/password: guest/guest)

5. Pod Autoscaling

Install metrics-server

  • Execute the playbook:
    ansible-playbook -i hosts.ini metrics-server.yml
  • Check that metrics-server is available and running:
    kubectl get apiservices | grep metrics.k8s.io
    image

Setup pod autoscaling

  • Create a HorizontalPodAutoscaler resource for the frontend service (a sketch of such a manifest follows this list):

    kubectl apply -f https://raw.githubusercontent.com/viastakhov/k8s-example/main/manifests/frontend-hpa.yaml
  • Verify created resource:

    kubectl get hpa

    image

  • Increase workload on frontend service:

    kubectl set env deployment/loadgenerator USERS=500
  • After a few minutes, additional frontend pods are created:

    kubectl get pod --selector app=frontend
    

    image

  • Decrease workload on frontend service:

    kubectl set env deployment/loadgenerator USERS=1
  • Watch several of the frontend pods terminate:

    kubectl get pod --selector app=frontend -w
    

    image

    kubectl get pod --selector app=frontend
    

    image
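
As referenced above, a minimal sketch of what frontend-hpa.yaml might look like; the actual manifest lives at manifests/frontend-hpa.yaml in the repository, so the replica bounds and target utilization shown here are assumptions:

  # Hypothetical HorizontalPodAutoscaler for the frontend deployment
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: frontend
    namespace: default
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: frontend
    minReplicas: 1                       # assumption
    maxReplicas: 5                       # assumption
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70       # assumption: CPU target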

6. Infrastructure Development Plan

IDP