CI/CD pipeline for microservice applications with blue/green deployment
- GitHub Repository: holds all the source code of the web application, the Dockerfile, the Jenkinsfile, the ansible playbook, and the CloudFormation stack files
- Jenkins Multibranch Pipeline: Jenkins is set up to process the branches "master", "blue", and "green" in the GitHub repository. Deployments to the Kubernetes/EKS cluster are only performed for branches "blue" and "green".
- Web Application: is based on the Python framework "Flask". The website is served by Python on port TCP/5000.
- Linting of Code: hadolint is used to lint the Dockerfile. pylint is used to lint the Python code.
- Kubernetes Cluster: ansible is used to spin up an Amazon EKS cluster with 3 worker nodes (EC2 instances) by executing CloudFormation stacks for creation of VPC, EKS Cluster, and EKS Nodegroup (Kubernetes worker nodes). Each of the 3 worker nodes is in a different availability zone (AZ), e.g. us-west-2a, us-west-2b, and us-west-2c. The worker nodes are part of an Autoscaling Group (ASG) that can scale up to 5 worker nodes.
- Docker Image: the web app is dockerized. The Docker image is pushed to Docker Hub as itsecat/flask-app.
- Application Deployment: the kubectl command is used as part of the Jenkins pipeline to create the blue and green deployments:
- Blue deployment comprises a blue load balancer (ELB), a blue Kubernetes deployment, and a blue Kubernetes service
- Green deployment comprises a green load balancer (ELB), a green Kubernetes deployment, and a green Kubernetes service (a manifest sketch follows this list)
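The blue manifest could look like the following minimal sketch; the service name, image, and port are taken from this setup, while the replica count and label scheme are assumptions. The green manifest would differ only in its names and color label.

apiVersion: v1
kind: Service
metadata:
  name: flaskapp-blue
spec:
  type: LoadBalancer          # EKS provisions the blue ELB for this service
  ports:
    - port: 5000              # Flask serves on TCP/5000
      targetPort: 5000
  selector:
    app: flaskapp
    color: blue
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-blue
spec:
  replicas: 3                 # assumed; one pod per worker node
  selector:
    matchLabels:
      app: flaskapp
      color: blue
  template:
    metadata:
      labels:
        app: flaskapp
        color: blue
    spec:
      containers:
        - name: flask-app
          image: itsecat/flask-app:latest   # image pushed to Docker Hub by the pipeline
          ports:
            - containerPort: 5000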
- cloudformation: YAML files describing the stacks for spinning up the EKS cluster. These files are read by ansible and sent to CloudFormation.
- eks-cluster.yml: creates resource AWS::EKS::Cluster (EKS control plane)
- eks-nodegroup.yml: creates resource AWS::EKS::Nodegroup (EKS worker nodes); a trimmed sketch follows this list
- vpc.yml: creates all AWS networking resources required for the EKS cluster
- kubernetes: contains the file "flask-app.yml" that describes the Kubernetes deployment and Kubernetes service
- the_app: contains all the Python, HTML, and CSS files that are part of the Flask web application
- vars: contains a main.yml that defines the variables used by the ansible playbook main.yml
- Dockerfile: describes how to containerize the demo web application
- Jenkinsfile: describes the declarative Jenkins pipeline for building the Kubernetes/EKS infrastructure as well as linting, pushing, and deploying the application
- ansible.cfg: ansible configuration file
- delete.yml: used for deleting all AWS resources that have been created for the EKS Cluster, EKS Nodegroup, and networking. Execute:
ansible-playbook -i inventory delete.yml
- inventory: ansible file holding information about hosts and connections
- main.yml: YAML file describing the ansible playbook that creates all AWS resources necessary for the EKS Cluster, EKS Nodegroup, and networking; a task sketch follows this list. Execute:
ansible-playbook -i inventory main.yml
- requirements.txt: defines Python modules that are required for linting and running the Flask web application
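The stack files follow the standard CloudFormation template schema. A trimmed sketch of what eks-nodegroup.yml might contain; the cross-stack import names are hypothetical, while the cluster name and the 3-to-5 node scaling range come from this setup:

AWSTemplateFormatVersion: "2010-09-09"
Description: EKS Nodegroup (Kubernetes worker nodes)
Resources:
  EKSNodegroup:
    Type: AWS::EKS::Nodegroup
    Properties:
      ClusterName: eks-example
      NodeRole: !ImportValue eks-example-node-role-arn   # hypothetical export from another stack
      Subnets:                                           # one subnet per availability zone
        - !ImportValue eks-example-subnet-a
        - !ImportValue eks-example-subnet-b
        - !ImportValue eks-example-subnet-c
      ScalingConfig:
        MinSize: 3          # one worker node per AZ
        DesiredSize: 3
        MaxSize: 5          # the ASG can scale up to 5 worker nodes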
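Within the playbook main.yml, each stack can be launched with ansible's cloudformation module. A minimal sketch of such a task, with an assumed stack name:

- name: Create EKS Nodegroup stack
  cloudformation:
    stack_name: eks-example-nodegroup     # assumed name
    state: present
    region: us-west-2
    template: cloudformation/eks-nodegroup.yml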
An EC2 Ubuntu 18.04.4 LTS instance was used to install Jenkins and the CloudBees Credentials plugin. An IAM policy "JenkinsMinimumSecurityModel" was assigned to an IAM role for the Jenkins EC2 instance in order to allow access to other AWS services like CloudFormation, EKS, or EC2/ELB.
hadolint was installed manually on the system.
wget https://github.com/hadolint/hadolint/releases/download/v1.18.0/hadolint-Linux-x86_64
mv hadolint-Linux-x86_64 hadolint
chmod +x hadolint
sudo install hadolint /usr/local/bin/
pylint will be automatically installed into a Python 3.6 venv (virtual environment) during execution of the pipeline. pylint is listed in the requirements.txt file.
Docker was installed manually on the system. The user "jenkins" was added to the group "docker" to allow building of Docker images.
$ sudo apt-get update
$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
$ sudo usermod -aG docker $USER && newgrp docker
$ docker run hello-world
The credentials for the Docker Hub account have been added to the Jenkins credentials store. Username and password are fetched from the store and inserted into the docker login command via Jenkins environment variables provided by the withCredentials clause.
boto3 and aws CLI v2 have been installed manually on the system.
sudo apt install python-boto3
boto3 uses the AWS credentials, which have to be set up manually by running aws configure as user 'jenkins':
sudo su - jenkins
aws configure
ansible-playbook uses boto3 to send the CloudFormation YAML files to AWS CloudFormation for executing the stacks for the EKS Cluster, EKS Nodegroup, and VPC network infrastructure. The creation of the resources can take up to 20 minutes.
The tool kubectl was installed manually on the system.
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
kubectl is used in this step of the Jenkins pipeline to delete existing blue or green deployments and services (if any) and to create new ones on Kubernetes (EKS), depending on the git branch. The aws eks command is used as part of the pipeline to configure kubectl to access the EKS control plane endpoint:
aws eks --region us-west-2 update-kubeconfig --name eks-example --kubeconfig "$HOME/.kube/eks-example"
export KUBECONFIG="$HOME/.kube/eks-example"
kubectl delete service flaskapp-blue
kubectl delete deployments flaskapp-blue
kubectl apply -f kubernetes/flask-app-blue.yml
Note: if Jenkins were not running on an EC2 instance with the proper permissions, the tool aws-iam-authenticator would also have to be installed alongside kubectl.
The publicly accessible URLs that are exposed by the load balancers for blue and green deployment can be queried with kubectl:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
flaskapp-blue LoadBalancer 10.100.182.119 ae8ca2a93345b4777b2010e91e99330e-1439484598.us-west-2.elb.amazonaws.com 5000:31632/TCP 3h33m
flaskapp-green LoadBalancer 10.100.24.235 a825ab26062f74aad82f3f6baf277715-764107461.us-west-2.elb.amazonaws.com 5000:30432/TCP 3h46m
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 12h
In my case, after having run the pipeline for the blue and green branches, the URLs are:
http://ae8ca2a93345b4777b2010e91e99330e-1439484598.us-west-2.elb.amazonaws.com:5000
http://a825ab26062f74aad82f3f6baf277715-764107461.us-west-2.elb.amazonaws.com:5000
In order to have one URL for the end user, a Route53 domain could be registered and an A record added that points to the ELB.
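A minimal CloudFormation sketch of such an alias record; the domain and record names are hypothetical, and the ELB DNS name is the one reported by kubectl get svc:

Resources:
  FlaskAppDNSRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.        # hypothetical registered domain
      Name: flaskapp.example.com.
      Type: A
      AliasTarget:
        # DNS name of whichever ELB (blue or green) should receive end-user traffic
        DNSName: ae8ca2a93345b4777b2010e91e99330e-1439484598.us-west-2.elb.amazonaws.com
        HostedZoneId: Z1H1FL5HABSF5       # canonical hosted zone ID of classic ELBs in us-west-2 (verify for your region)

Switching end-user traffic between blue and green would then only require repointing this alias at the other ELB.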
Blue and green deployments can be removed with these commands:
kubectl delete service flaskapp-blue
kubectl delete deployments flaskapp-blue
kubectl delete service flaskapp-green
kubectl delete deployments flaskapp-green
The EKS cluster infrastructure can be terminated by running this command:
ansible-playbook -i inventory delete.yml