In this project, we containerized and deployed a Flask API to a Kubernetes cluster using Docker, AWS EKS, CodePipeline, and CodeBuild.
This app consists of a simple API with three endpoints:

GET '/'
: This is a simple health check, which returns the response 'Healthy'.

POST '/auth'
: This takes an email and password as JSON arguments and returns a JWT based on a custom secret.

GET '/contents'
: This requires a valid JWT, and returns the unencrypted contents of that token.
The app relies on a secret set as the environment variable JWT_SECRET to produce a JWT.
The built-in Flask server is adequate for local development, but not production, so you will be using the production-ready Gunicorn server when deploying the app.
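For a quick local run with Gunicorn (the main:APP module:object name and the example secret are assumptions about this repo's layout, not confirmed by this README):

export JWT_SECRET='myjwtsecret'
gunicorn -b :8080 main:APP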
Prerequisites:
- Docker Engine
- AWS Account
- Write a Dockerfile
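A minimal sketch of what that Dockerfile might look like (the base image, requirements.txt, and the main:APP entrypoint are assumptions, not confirmed here; gunicorn must be listed in requirements.txt):

FROM python:3.7-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:APP"]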
- Build and test the container locally
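For example (the image tag is arbitrary):

docker build -t simple-jwt-api .
docker run --rm -p 8080:8080 -e JWT_SECRET='myjwtsecret' simple-jwt-api
curl localhost:8080/
--> should return 'Healthy'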
- Create an EKS cluster:
eksctl create cluster --name <cluster-name>
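Once the cluster is up, confirm that the nodes registered:

kubectl get nodes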
- Set the following environment variables:
export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
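The trust document above is presumably consumed by a create-role call for the role that the commands below attach a policy to; a sketch of that step:

aws iam create-role --role-name UdacityFlaskDeployCBKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'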
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "eks:Describe*", "ssm:GetParameters" ], "Resource": "*" } ] }' > /tmp/iam-role-policy
--> creates a policy document
aws iam put-role-policy --role-name UdacityFlaskDeployCBKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy
--> attaches the policy to the UdacityFlaskDeployCBKubectlRole role
- Get the current config map and save it to a file:
kubectl get -n kube-system configmap/aws-auth -o yaml > /tmp/aws-auth-patch.yml
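Before applying the patch below, add the new role under the mapRoles section of /tmp/aws-auth-patch.yml; a typical entry (the username and groups shown are common choices, not dictated by this README):

- rolearn: arn:aws:iam::<ACCOUNT_ID>:role/UdacityFlaskDeployCBKubectlRole
  username: build
  groups:
    - system:masters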
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
- Store a secret using AWS Parameter Store
- Added the environment variable to buildspec.yaml via its parameter-store mapping, as sketched below.
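That mapping typically looks like this in buildspec.yaml (assuming the parameter name used in the next step):

env:
  parameter-store:
    JWT_SECRET: JWT_SECRET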
- aws ssm put-parameter --name JWT_SECRET --value "YourJWTSecret" --type SecureString --> puts the secret into AWS Parameter Store
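To confirm it was stored (prints the decrypted value):

aws ssm get-parameter --name JWT_SECRET --with-decryption --query 'Parameter.Value' --output text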
- Create a CodePipeline triggered by GitHub check-ins
- Modified the ci-cd-codepipeline.cfn.yml file with the GitHub and EKS cluster information.
- Create a CodeBuild stage which will build, test, and deploy your code
- Created a stack in CloudFormation using the file ci-cd-codepipeline.cfn.yml.
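If you create the stack from the CLI rather than the console, the equivalent call looks like this (the stack name is arbitrary; the template's GitHub and cluster values still need to be supplied via --parameters, and the capability flag is required because the template creates IAM resources):

aws cloudformation create-stack --stack-name <stack-name> --template-body file://ci-cd-codepipeline.cfn.yml --capabilities CAPABILITY_NAMED_IAM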
kubectl get services <service-name> -o wide
--> used to get the external IP for the service.
- export TOKEN=$(curl -d '{"email":"<EMAIL>","password":"<PASSWORD>"}' -H "Content-Type: application/json" -X POST localhost:8080/auth | jq -r '.token')
- curl --request GET 'http://127.0.0.1:8080/contents' -H "Authorization: Bearer ${TOKEN}" | jq .
To test the deployed service, replace localhost:8080 / 127.0.0.1:8080 in these commands with the external IP returned by kubectl get services above.