Capstone project for Udacity's Cloud DevOps Nanodegree
We will prepare a small API that behaves as a database client. While our DB remains completely secured inside the cluster, we can POST queries to the API, which will execute them against the database and return the results.
Kubernetes
The smallest unit available in Kubernetes is the Pod, the basic building block:
Pods are the smallest deployable units of computing that can be created and managed in Kubernetes.
In other words, a Pod encapsulates the elements of the application that must work together. Therefore, when scaling the application by increasing or decreasing the number of Pods, all the elements inside the Pod scale together. Moreover, containers inside the same Pod can easily communicate with each other while remaining isolated from anything that lives outside their world.
However, by definition Pods are not meant to be reliable units. Yes, they have their own IP addresses and we could communicate directly with any of them, but this is discouraged in a production environment because they are ephemeral. Depending on the application load, new Pods may appear or others may crash, so we instead want to communicate with our application at a higher level of abstraction. To achieve this, Kubernetes gives us two resources: Workload API Objects and Services.
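Just to illustrate the idea (this is a generic sketch, not a manifest from this repo; the name, labels and image are placeholders), a minimal Pod declaration looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: flask-api-pod            # hypothetical name, for illustration only
  labels:
    app: flask-api
spec:
  containers:
    - name: flask-api
      image: <docker-hub-user>/flask-api:latest   # placeholder image
      ports:
        - containerPort: 5000    # port the Flask app listens on
All containers listed under spec.containers share the Pod's network namespace, which is what lets them talk to each other over localhost.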
The Replica Set is an API Object that helps manage the scaling of Pods:
Replica Sets ensure that a specified number of pod replicas are running at any given time.
Based on a given template and specs, such as spec.replicas = 3, a Replica Set creates the Pods it manages. However, a Replica Set may also manage Pods that it did not create by specifying a selector, which will be used to match any Pod carrying that given label.
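As a sketch of how the selector works (again, names and image are illustrative assumptions), a Replica Set that manages any Pod labeled app: flask-api could look like:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: flask-api-rs             # hypothetical name
spec:
  replicas: 3                    # desired number of Pods
  selector:
    matchLabels:
      app: flask-api             # any Pod carrying this label is managed by the Replica Set
  template:                      # template used to create new Pods when needed
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api
          image: <docker-hub-user>/flask-api:latest   # placeholder image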
However, this API lacks the ability to perform updates. That's why we need Deployments.
Deployments encapsulate both Replica Sets and Pods, providing a declarative method of updating their state through kubectl. This adds another layer of abstraction to managing Kubernetes:
User [interface] -> Deployment -> Replica Set -> Pod
Through the kubectl interface, the Deployment will check the current status of the cluster and make it match the desired state specified by the user.
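Leaving aside the actual kubernetes/flask-deployment.yaml used later, a minimal Deployment sketch (names and image are assumptions) would be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api
          image: <docker-hub-user>/flask-api:latest   # placeholder image
          ports:
            - containerPort: 5000
Changing the image tag in this template and applying it with kubectl apply -f is enough for the Deployment to roll out a new Replica Set with the updated Pods.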
Finally, we will introduce another Kubernetes framework called StatefulSet, used to manage stateful applications such as databases. In this link I found the best definition of them:
StatefulSets provides the capabilities of stable unique network hostnames and stable dedicated network storage volume mappings, essential for a database cluster to function properly and for data to exist and outlive the lifetime of inherently ephemeral containers.
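As a rough sketch (assumed names and sizes, not necessarily the manifest used in this repo), a StatefulSet for a Postgres database could request its own storage through volumeClaimTemplates:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres                 # hypothetical name
spec:
  serviceName: postgres          # headless Service that provides stable hostnames
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # creates one PersistentVolumeClaim per replica
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi         # assumed size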
As explained in the docs, PersistentVolumes are pieces of storage that will be requested through PersistentVolumeClaims.
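For instance, a standalone claim (hypothetical name and size) is declared like this, and a Pod then mounts it by referencing the claim name under its volumes section:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc             # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce              # the volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi               # amount of storage requested from a PersistentVolume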
For any quick modifications to app.py, we can test locally by running a Postgres Docker container:
- Run the Postgres Docker container:
docker run -p 5432:5432 --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
- Test running a query with:
sh post_query.sh localhost 5000 "<my_query>"
- The configuration file for the application and scripts can be found in scripts/config.sh.
- Scripts are prepared to be run from the main repo directory.
Note that we set ignore-modules=flask_sqlalchemy in the pylintrc to disable E1101 errors due to methods only being available at runtime. Those can't be picked up by pylint, which would otherwise throw ghost errors.
To test the application in Kubernetes (e.g., minikube), first get the IP of the cluster with minikube ip and run the following commands:
sh scripts/deploy.sh
sh scripts/post_query.sh 192.168.99.100 31234 "create table account (id_user serial PRIMARY KEY, username VARCHAR(50) NOT NULL)"
should return OK.
sh scripts/post_query.sh 192.168.99.100 31234 "insert into account (username) values ('pmbrull')"
should return OK.
sh scripts/post_query.sh 192.168.99.100 31234 "select * from account"
should return
{
  "result": [
    {
      "id_user": 1,
      "username": "pmbrull"
    }
  ]
}
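The queries above reach the Flask app through a Service exposed on port 31234; a NodePort Service along these lines would make that work (this is a sketch assuming the app label and container port, not necessarily the exact manifest under kubernetes/):
apiVersion: v1
kind: Service
metadata:
  name: flask-service            # hypothetical name
spec:
  type: NodePort                 # exposes the Service on every node's IP at a static port
  selector:
    app: flask-api               # assumed Pod label
  ports:
    - port: 5000
      targetPort: 5000           # port the Flask container listens on
      nodePort: 31234            # matches the port used in the test commands above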
We enabled a rolling update on the Flask app deployment to ensure that we always have available instances, even if they do not yet have the new code version. To do so, in kubernetes/flask-deployment.yaml:
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
We have two replicas and, at most, only one of them will be down at a time while the image is updated.
On the Jenkins EC2 instance we need to install:
- awscli, configured with the kops user:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
- kops - to create the cluster and deploy our app.
- kubectl - to interact with the cluster.
- Register a domain in Route 53. For that, I registered pmbrull-k8.com so that I can reach my cluster by name.
- Create an S3 bucket to store the cluster state & enable versioning.
- Export the cluster variables:
export NAME=pmbrull-k8.com
export KOPS_STATE_STORE=s3://pmbrull-k8-com-state-store
- Create an ssh key for the server if we don't have one: ssh-keygen.
- The SSH public key must be specified when running with AWS (create it with kops create secret --name pmbrull-k8.com sshpublickey admin -i ~/.ssh/id_rsa.pub).
- Then, create the Kubernetes cluster with kops:
kops create cluster \
--zones us-west-2a \
${NAME}
- Now we can review the cluster configuration.
- Finally, build the cluster:
kops update cluster ${NAME} --yes
- It will take a few minutes to validate the DNS of the created Route 53 domain. Then, we can validate the cluster:
kops validate cluster
- Handshake the Jenkins server with the cluster so that we can execute kubectl commands directly to pmbrull-k8.com:
sudo mkdir -p /var/lib/jenkins/.kube
sudo cp ~/.kube/config /var/lib/jenkins/.kube/
cd /var/lib/jenkins/.kube/
sudo chown jenkins:jenkins config
sudo chmod 750 config
- In Jenkins, store the Docker Hub password as Secret text to upload the image.
- Once we're all finished, delete the cluster:
kops delete cluster --name=${NAME} --state=${KOPS_STATE_STORE} --yes
To fix the docker agent permission error in the Jenkinsfile:
sudo usermod -aG docker jenkins
- kops-validate-cluster.png: Shows that after creating the cluster, everything is correctly set and available through the Jenkins server.
- kops-cluster.png: To check that we do have instances running as master and nodes.
- jenkins-success.png: With all the steps of the pipeline succeeding.
- lint-step.png: With the details of the linting step, to check both Python and Dockerfiles.
- k8s-services: With the result of the correct deployment of the kubernetes templates.
- load-balancer-service: Showing the details of the flask service created as a Load Balancer.
- test-pmbrull-k8: Testing the deployed application.