This project demonstrates a comprehensive end-to-end DevOps approach for a simple Python Flask application that interacts with a MySQL database. It showcases how to build, deploy, and manage the application using tools like Docker, Kubernetes, AWS EKS and RDS, CI/CD pipelines, and monitoring systems.
Before you begin, ensure you have met the following requirements:
- Python
- MySQL (MySQL for Ubuntu | MySQL for Windows)
- AWS CLI configured with appropriate permissions
- Docker installed and configured
- kubectl installed and configured to interact with your Kubernetes cluster
- Terraform
- Helm
- GitHub CLI
- K9s
- Beekeeper Studio (for database access)
Development:
- Python: The core programming language for the application.
- Flask: The web framework used to build the Python application.
- MySQL: The relational database management system used to store application data.
- Docker: Used for containerizing the application and database for portability and consistency.
- Docker Compose: Used to manage the multi-container environment for local development.
Deployment and Infrastructure:
- AWS EKS: The managed Kubernetes service on AWS for deploying and managing the application.
- AWS RDS: The managed relational database service on AWS for hosting the MySQL database.
- AWS ECR: The managed container registry on AWS for storing Docker images.
- Terraform: Used for infrastructure as code, automating the creation and management of AWS resources.
- Kubernetes: The container orchestration platform used to deploy and manage the application on EKS.
CI/CD and Automation:
- GitHub Actions: The CI/CD platform used to automate the build, test, and deployment process.
- GitHub CLI: Used to interact with GitHub repositories and manage secrets.
Monitoring and Observability:
- Prometheus: The open-source monitoring system used to collect metrics from the application and infrastructure.
- Grafana: The open-source dashboarding tool used to visualize and analyze metrics collected by Prometheus.
- Alertmanager: Used to configure alerts based on Prometheus metrics.
Other Tools:
- K9s: A terminal-based Kubernetes UI for interacting with the cluster.
- Beekeeper Studio: A database management tool for accessing and managing the MySQL database.
This project demonstrates a comprehensive set of tools and technologies commonly used in modern DevOps practices. It highlights the importance of automation, containerization, cloud infrastructure, and monitoring for building and managing reliable and scalable applications.
This section demonstrates how to run the application locally, providing a foundation for development and testing. This setup mirrors the production environment as much as possible.
- Start your MySQL server.

- Create a new MySQL database for the application.

- Update the database configuration in `app.py` to match your local MySQL settings:
  - DB_HOST: localhost
  - DB_USER: your MySQL username
  - DB_PASSWORD: your MySQL password
  - DB_DATABASE: your database name
- Create a table in the database that will be used by your application:

  ```sql
  CREATE TABLE tasks (
      id SERIAL PRIMARY KEY,
      title VARCHAR(255) NOT NULL,
      description TEXT,
      is_complete BOOLEAN DEFAULT false
  );
  ```
- Clone the repository:

  ```bash
  git clone https://github.com/vishalbansal28/End-to-end-DevOps-Python-MySQL
  cd End-to-end-DevOps-Python-MySQL/todo-app
  ```
- Create a virtual environment and activate it:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install the required dependencies:

  ```bash
  pip3 install -r requirements.txt
  ```
- Start the Flask application:

  ```bash
  python3 app.py
  ```
- Access the application at `http://localhost:5000`.
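As a quick smoke test once the server is up, you can hit the root route from another terminal (this assumes the task list is served at `/`; adjust the path if your routes differ):

```bash
# Expect an HTTP 200 and the rendered task page.
curl -i http://localhost:5000/
```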
This section demonstrates how to run the application within a Docker container, providing a portable and consistent environment.
- Build the Docker image:

  ```bash
  docker build -t my-flask-app .
  ```
- Run the Docker container with the host network (to access the local MySQL server):

  ```bash
  docker run --network=host my-flask-app
  ```
- Access the application at `http://localhost:5000`.
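If the image reads its database settings from environment variables (an assumption — check `app.py` for the exact names), you can override them at run time instead of baking them into the image. Note that `--network=host` behaves as described only on Linux hosts:

```bash
# Hypothetical env var names — match them to whatever app.py actually reads.
docker run --network=host \
  -e DB_HOST=localhost \
  -e DB_USER=your_mysql_user \
  -e DB_PASSWORD=your_mysql_password \
  -e DB_DATABASE=your_database_name \
  my-flask-app
```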
This section demonstrates how to use Docker Compose to manage both the application and the database containers, simplifying the local development setup.
To run the application using Docker Compose:

```bash
docker-compose up
```
This will run both the application and the database containers, and will also create a table in the database using the SQL script `init-db.sql`.
To take it down, run the following command:

```bash
docker-compose down
```
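To verify that `init-db.sql` actually created the table, you can open a MySQL shell inside the database container (the service name `db` and the database name here are assumptions — check `docker-compose.yml` for the real values):

```bash
docker-compose exec db mysql -u root -p -e "SHOW TABLES IN your_database;"
```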
This section demonstrates the core of the DevOps approach, deploying the application to a production-ready environment on AWS EKS and RDS.
To build and deploy the application on AWS EKS and RDS, execute the following script:

```bash
./build.sh
```
This will build the infrastructure, deploy the monitoring tools, and run several commands:

- EKS (Kubernetes cluster)
- 2x ECR (Elastic Container Registry): one for the app image and one for the DB Kubernetes job
- RDS (Relational Database Service): an RDS cluster with one instance
- Generate and store the RDS credentials in AWS Secrets Manager
- VPC, subnets, and network configuration
- Deploy the monitoring tools (Alertmanager, Prometheus, Grafana)
- Build and push the Docker images for the application and the MySQL Kubernetes job to ECR
- Create Kubernetes secrets with the RDS credentials
- Create a namespace and deploy the application and the job
- Reveal the LoadBalancer URLs for the application, Alertmanager, Prometheus, and Grafana
IMPORTANT: Make sure to update the variables in the script before running it.
Once the `build.sh` script has finished executing, you should see the URLs for all the deployed applications.

NOTE: For Alertmanager, append port 9093 to the URL; for Prometheus, append port 9090.
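If you need to list the LoadBalancer endpoints again after the script's output has scrolled away, you can query them directly:

```bash
# Shows every Service of type LoadBalancer across all namespaces,
# including the external hostname.
kubectl get svc --all-namespaces | grep LoadBalancer
```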
The application should look like this:

And once you add data, it should look like this:

You can complete or delete a task, and the change takes effect automatically in the database.
This section demonstrates how to access the database deployed on AWS RDS.
You will also get the database endpoint URL. Use that URL to access the database with the following command:

```bash
mysql -h RDS_ENDPOINT_URL -P 3306 -u root -p
```
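If you need to look the endpoint up again later, the AWS CLI can fetch it (this assumes a single RDS cluster in the region; add `--db-cluster-identifier` to narrow it down):

```bash
aws rds describe-db-clusters --query 'DBClusters[].Endpoint' --output text
```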
NOTE: Once you run the command you will be asked for the database password. There are two ways to get it:

- Open K9s in the terminal, press `Ctrl+A` to navigate to the main screen, type `/secrets` to search for the Kubernetes secrets, then `/rds` inside the secrets view to filter for names that include `rds`. You should get the following:

  Over `rds-password`, press `x` to decode the base64-encoded password.

- Through AWS, navigate to Secrets Manager, click on the created secret `rds-cluster-secret`, and from the Overview tab click `Retrieve secret value`. This will show you the username and the generated password for the database.
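Both methods have CLI equivalents, sketched below (the secret names come from the text above, but the data key `password` and the namespace are assumptions — adjust them to your deployment):

```bash
# Decode the Kubernetes secret (assumes a data key named "password"):
kubectl get secret rds-password -o jsonpath='{.data.password}' | base64 -d; echo

# Or read the credentials from AWS Secrets Manager:
aws secretsmanager get-secret-value \
  --secret-id rds-cluster-secret \
  --query SecretString --output text
```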
Once you are connected to the database through the terminal, you can run the following commands to check the data in the database:

```sql
show databases;
use DATABASE_NAME;
show tables;
select * from TABLE_NAME;
```
To connect with Beekeeper Studio, choose MySQL as the database type, fill in the `Host`, `User`, and `Password`, make sure the port is 3306, then connect.
This section explains the CI/CD workflows used to automate the build, test, and deployment process.
This project is equipped with GitHub Actions workflows to automate the Continuous Integration (CI) and Continuous Deployment (CD) processes.
The CI workflow is triggered on pushes to the `main` branch. It performs the following tasks:
- Checks out the code from the repository.
- Configures AWS credentials using secrets stored in the GitHub repository.
- Logs in to Amazon ECR.
- Builds the Docker image for the Python app.
- Builds the Docker image for MySQL Kubernetes job.
- Tags the images and pushes each one to its Amazon ECR repository.
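For reference, the build, tag, and push steps above boil down to roughly the following commands (the account ID `123456789012`, the region, and the repository name are placeholders — substitute your own):

```bash
# Authenticate Docker against ECR, then build, tag, and push one image.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t todo-app ./todo-app
docker tag todo-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/todo-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/todo-app:latest
```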
The CD workflow is triggered upon the successful completion of the CI workflow. It performs the following tasks:
- Checks out the code from the repository.
- Configures AWS credentials using secrets stored in the GitHub repository.
- Sets up `kubectl` with the required Kubernetes version.
- Deploys the Kubernetes manifests found in the `k8s` directory to the EKS cluster.
The following secrets need to be set in your GitHub repository for the workflows to function correctly:

- `AWS_ACCESS_KEY_ID`: Your AWS Access Key ID.
- `AWS_SECRET_ACCESS_KEY`: Your AWS Secret Access Key.
- `KUBECONFIG_SECRET`: Your Kubernetes config file encoded in base64.
Before using the GitHub Actions workflows, you need to set up the AWS credentials as secrets in your GitHub repository. The included `github_secrets.sh` script automates adding your AWS credentials to GitHub Secrets, which are then used by the workflows. To use this script:
- Ensure you have the GitHub CLI (`gh`) installed and authenticated.
- Run the script with the following command:

  ```bash
  ./github_secrets.sh
  ```
This script will:
- Extract your AWS Access Key ID and Secret Access Key from your local AWS configuration.
- Use the GitHub CLI to set these as secrets in your GitHub repository.
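If you prefer to set the secrets by hand, the equivalent commands are roughly the following (assuming `gh` is authenticated against the correct repository):

```bash
# Pull the keys from your local AWS CLI configuration and store them
# as repository secrets.
gh secret set AWS_ACCESS_KEY_ID --body "$(aws configure get aws_access_key_id)"
gh secret set AWS_SECRET_ACCESS_KEY --body "$(aws configure get aws_secret_access_key)"
```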
Note: It's crucial to handle AWS credentials securely. The provided script is for demonstration purposes, and in a production environment, you should use a secure method to inject these credentials into your CI/CD pipeline.
These secrets are consumed by the GitHub Actions workflows to access your AWS resources and manage your Kubernetes cluster.
For the Continuous Deployment workflow to function properly, it requires access to your Kubernetes cluster. This access is granted through the `KUBECONFIG` file, which you need to add manually to your GitHub repository's secrets to ensure secure and proper deployment.

To add your `KUBECONFIG` to GitHub Secrets, follow these steps:
- Encode your `KUBECONFIG` file to a base64 string:

  ```bash
  cat ~/.kube/config | base64
  ```

- Copy the encoded output to your clipboard.
- Navigate to your GitHub repository on the web.
- Go to `Settings` > `Secrets` > `New repository secret`.
- Name the secret `KUBECONFIG_SECRET`.
- Paste the base64-encoded `KUBECONFIG` data into the secret's value field.
- Click `Add secret` to save the new secret.
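The same can be done in one step with the GitHub CLI (note that `base64 -w 0`, which disables line wrapping, is GNU-specific; on macOS use `base64 -i ~/.kube/config` instead):

```bash
gh secret set KUBECONFIG_SECRET --body "$(base64 -w 0 ~/.kube/config)"
```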
This `KUBECONFIG_SECRET` is then used by the CD workflow to authenticate with your Kubernetes cluster and apply the required configurations.
Important: Be cautious with your `KUBECONFIG` data, as it provides administrative access to your Kubernetes cluster. Only store it in secure locations, and never expose it in logs or to unauthorized users.
This section explains how to tear down the deployed infrastructure.
In case you need to tear down the infrastructure and services that you have deployed, a script named `destroy.sh` is provided in the repository. This script will:
- Log in to Amazon ECR.
- Delete the specified Docker image from the ECR repository.
- Delete the Kubernetes deployment and associated resources.
- Delete the Kubernetes namespace.
- Destroy the AWS resources created by Terraform.
- Open the `destroy.sh` script.
- Ensure that the variables at the top of the script match your AWS and Kubernetes settings:

  ```bash
  $1="ECR_REPOSITORY_NAME"
  $2="REGION"
  ```

- Save the script and make it executable:

  ```bash
  chmod +x destroy.sh
  ```

- Run the script:

  ```bash
  ./destroy.sh
  ```
This script will execute another script, `ecr-img-delete.sh`, which deletes all the images in the two ECR repositories to make sure both are empty, and then runs the `terraform` commands to destroy all resources related to your deployment.

Once `terraform destroy` starts, RDS will begin creating a snapshot as a backup of the database, which causes the destroy process to fail partway through. The script therefore deletes the created snapshot and runs `terraform destroy` again to make sure all resources are deleted:
```bash
aws rds delete-db-cluster-snapshot --db-cluster-snapshot-identifier $rds_snapshot_name --region $region
```
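To double-check that no cluster snapshots were left behind (and are quietly accruing storage costs), you can list whatever remains:

```bash
aws rds describe-db-cluster-snapshots --region $region \
  --query 'DBClusterSnapshots[].DBClusterSnapshotIdentifier'
```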
It is essential to verify that the script has completed successfully to ensure that all resources have been cleaned up and no unexpected costs are incurred.
Make sure to replace URLs, database configuration details, and any other specific instructions to fit your project. This README provides a basic guideline for setting up and running the application locally, with Docker, with Docker Compose, on minikube, and on AWS EKS and RDS.
Contributions are welcome! If you have any suggestions, improvements, or bug fixes, please feel free to open an issue or submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.