
k8s stack

Overview

This project is a simple demo of how to set up a serverless Kubernetes cluster with a GitOps approach, using the following tools: Kind, Istio, Knative, ArgoCD and Helm.

A short overview of the k8s cluster setup is presented below:

Kubernetes Architecture Overview

The CI pipeline builds the project and the application is deployed to the cluster using ArgoCD. The application is deployed to a single k8s cluster (for demo purposes), in a different namespace depending on the triggering branch (see the sketch after the list):

  • The main branch will deploy to the prod namespace
  • The develop branch will deploy to the staging namespace
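
As an illustration, this branch-to-namespace mapping could be expressed in a small CI step roughly like the following (a minimal sketch with placeholder variable names, not taken from this repo's actual pipeline):

## Hypothetical CI step: pick the target namespace based on the
## branch that triggered the build (variable names are illustrative)
case "$BRANCH_NAME" in
  main)    NAMESPACE="prod" ;;
  develop) NAMESPACE="staging" ;;
  *)       echo "No deployment configured for branch $BRANCH_NAME"; exit 0 ;;
esac
echo "Deploying to namespace: $NAMESPACE"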

A short diagram of how the CI/CD project is structured is presented below (for the sake of the demo, the infrastructure deployment manifests live in the same repo as the application code, but in a real-world scenario they would be in a separate repo):

CI/CD Pipeline Diagram

Prerequisites

At the time of writing, this setup was tested with the following versions:

  • Docker: 20.10.14
  • Kind: v0.19.0 go1.20.4 darwin/arm64 and kindest/node:v1.27.1
  • Kubectl: 1.21.9
  • Istioctl: 1.17.2
  • Helm: 3.10.2

Setup

Manual

Please follow the steps below to start the local kind cluster with Istio and Knative installed.

  1. Create the kind cluster
kind create cluster --name k8s-demo --config "./configs/kind-cluster.yaml"

Once the cluster is created, you can verify that the kind-k8s-demo kubectl context was set up correctly with the following command:

kubectl cluster-info --context kind-k8s-demo
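
If you want a quick sanity check, you can also list the kind clusters on the machine and inspect the node that backs the new cluster:

## "k8s-demo" should appear in the list of clusters
kind get clusters

## Shows the kindest/node image version and node status
kubectl get nodes -o wide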
  2. Install Istio
istioctl install -f ./configs/istio.yaml -y
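
Before moving on, it is worth checking that the Istio control plane came up cleanly:

## The istiod and ingress gateway pods should be Running
kubectl get pods -n istio-system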
  3. Install Knative
## Installing required custom resources for knative serving component on the created kind cluster
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.10.1/serving-crds.yaml

## Installing the core components of knative serving on the created kind cluster
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.10.1/serving-core.yaml

## Installing the knative Istio controller on the created kind cluster
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.10.0/net-istio.yaml

## Configuring DNS to use nip.io - nip.io provides a wildcard DNS setup that will automatically resolve to the IP address you put in front of nip.io.
kubectl patch configmap/config-domain --namespace knative-serving --type merge --patch '{"data":{"127.0.0.1.nip.io":""}}'

## Configure mTLS for knative serving which secures service-to-service communication within the cluster
kubectl label namespace knative-serving istio-injection=enabled
kubectl apply -f configs/knative-mtls-config.yaml
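
As with Istio, it is a good idea to wait until the Knative Serving components are ready before continuing:

## Block until all Knative Serving pods report Ready
kubectl wait --for=condition=Ready pods --all -n knative-serving --timeout=300s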
  4. Create the namespaces that we will use for our applications
## Create namespaces
kubectl create namespace staging
kubectl create namespace prod
kubectl label namespace staging istio-injection=enabled
kubectl label namespace prod istio-injection=enabled
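
You can verify that both namespaces exist and carry the sidecar-injection label:

## Both namespaces should show istio-injection=enabled
kubectl get namespaces staging prod --show-labels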
  5. Install ArgoCD using Helm

Before we install the ArgoCD chart, we need to generate a Chart.lock file for it, so that our dependency (the upstream argo-cd chart) can be rebuilt. This is important later, when we let Argo CD manage this chart itself, to avoid sync errors. We can do this by running the following commands:

helm repo add argo https://argoproj.github.io/argo-helm
helm dep update argocd/helm
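
If you want to double-check the resolved dependency, you can list it (this only inspects the local chart, which lives at argocd/helm as above):

## Shows the declared chart dependencies and their lock status
helm dependency list argocd/helm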

We also need to create the namespace where we will install ArgoCD:

kubectl create namespace argocd

Now we can install ArgoCD with the following command:

helm install argo-cd argocd/helm --namespace argocd

Please note that this might take some time, so be patient and wait until all of the services are up and running before proceeding.
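
One way to watch progress is to block until every ArgoCD pod reports ready, for example:

## Waits for all pods in the argocd namespace to become Ready
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=600s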

Once Argo is installed you can forward the server port with the following command:

kubectl port-forward svc/argo-cd-argocd-server -n argocd 8080:443

You can now access the ArgoCD UI with the following URL: https://localhost:8080

Notes:

The default username for ArgoCD is admin. The password is auto-generated and we can get it with the following command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
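
If you have the argocd CLI installed, you can also log in from the terminal, reusing the command above to fetch the password (assuming the port-forward is still running):

## Logs in via the forwarded port; --insecure accepts the
## self-signed certificate of this local setup
argocd login localhost:8080 \
  --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)" \
  --insecure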
  6. Finally, we can configure ArgoCD to manage the cluster for the argo, staging and prod namespaces.

First, we configure the argo central application:

helm template argocd/apps/argocd/ --namespace argocd | kubectl apply -f -

After a few moments, the argo application should be visible as running in the ArgoCD UI:

ArgoCD Application

Once argo is configured and visible within the UI, we can configure the staging and prod namespaces:

helm template argocd/apps/staging/ --namespace argocd | kubectl apply -f -
helm template argocd/apps/prod/ --namespace argocd | kubectl apply -f -
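
You can also confirm the state of all Argo applications from the CLI:

## Lists the ArgoCD Application resources with their sync and health status
kubectl get applications -n argocd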

All applications should now be visible in the ArgoCD UI, and from now on, any additional apps that we add for either staging or prod will be automatically synced by Argo.

Overview of all applications:

ArgoCD Applications

Example for the staging application:

  • Overview of the Staging root app:

ArgoCD Staging Application

  • Status of the apps of the Staging root app:

ArgoCD Staging Application Apps

  • Overview of the Postgres staging app and all dependencies that were deployed:

ArgoCD Staging Postgres App

  • Overview of the Golang staging app and all dependencies that were deployed:

ArgoCD Staging Golang App

Script

  • Alternatively, you can use the scripts located under the scripts folder to set up the cluster, set up ArgoCD and delete the cluster.

Example to set up the cluster and configure ArgoCD:

./scripts/1-setup-cluster.sh
./scripts/2-setup-argo.sh
./scripts/3-configure-argo-sync.sh

Example to delete the cluster:

./scripts/4-delete-cluster.sh

Usage

Once the cluster is up and running and both the staging and prod applications are configured, you can test the application with the following commands:

  • For staging:

Checking the Knative URL that was created for the service

kubectl get ksvc -n staging

Getting the list of configured todos

curl -X GET http://todo-api-staging.staging.127.0.0.1.nip.io/api/v1/todos
  • For prod:

Checking the Knative URL that was created for the service

kubectl get ksvc -n prod

Getting the list of configured todos

curl -X GET http://todo-api-prod.prod.127.0.0.1.nip.io/api/v1/todos
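
For convenience, the two steps can be combined by reading the URL straight off the Knative service object (a small sketch; the ksvc name todo-api-staging is assumed from the staging hostname above):

## Resolve the service URL from the ksvc status and query the API
URL=$(kubectl get ksvc todo-api-staging -n staging -o jsonpath='{.status.url}')
curl -X GET "$URL/api/v1/todos"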
