This project demonstrates the deployment of a dynamic web application on AWS using Docker, Amazon ECR (Elastic Container Registry), and Amazon ECS (Elastic Container Service). The deployment leverages a 3-tier architecture with public and private subnets across two availability zones to ensure high availability and fault tolerance.
Before you start, ensure you have the following tools installed:
- Git
- Visual Studio Code
- Docker
- AWS CLI
- Flyway
- Virtual Private Cloud (VPC): A 3-tier VPC is established with public subnets, private app subnets, and private data subnets across two Availability Zones. This segregation enhances security and keeps the different components organized.
- Public Subnets: Host infrastructure components such as the NAT Gateway and Application Load Balancer.
- Internet Gateway: Enables communication between instances in the VPC and the internet.
- Private Subnets: Host the web server that serves web pages and applications securely.
- EC2 Instances: Host the WordPress website, accessible via an EC2 Instance Connect Endpoint.
- Bastion Host: Assists in migrating data into the RDS database while providing controlled perimeter access.
- AWS Fargate: Runs containers without the need to manage servers or clusters.
- S3 Bucket: Stores the environment file.
- Application Load Balancer: Distributes web traffic across an Auto Scaling Group of EC2 instances in two Availability Zones to ensure high availability and fault tolerance.
- Availability Zones: Deploying resources across multiple zones ensures high availability and fault tolerance.
- Public-Subnet Resources: The NAT Gateway, Bastion Host, and Application Load Balancer are deployed in the public subnets.
- Auto Scaling Group: Dynamically manages EC2 instances to provide scalability, fault tolerance, and elasticity.
- Route 53: Manages domain name registration and DNS records.
- Docker: A Dockerfile is used to build the Docker image, with Build Arguments and Environment Variables handling secrets securely. This setup allows building the image locally and pushing it to Amazon ECR.
- Security Groups: Act as network firewalls to control traffic.
- Instances: Configured to access the internet via the NAT Gateway, even when in private subnets.
- GitHub: Used for version control and collaboration; stores the web files.
- Git: Manages a .gitignore file to prevent the Dockerfile from being committed to GitHub.
- Certificate Manager: Manages SSL/TLS certificates for secure application communications.
- SNS (Simple Notification Service): Configured to send alerts about activities within the Auto Scaling Group.
- EFS (Elastic File System): Provides a shared file system.
- RDS (Relational Database Service): Manages the MySQL database.
- IAM Roles: Allow authentication with AWS using an access key ID and secret access key to push the container image to ECR.
- Flyway: Organizes database scripts and migrates them securely into the MySQL RDS instance over an SSH tunnel.
- Create a VPC:
  - Establish a VPC with public and private subnets across two Availability Zones.
  - Enable DNS hostnames within the VPC.
- Set Up Internet Gateway:
  - Attach the Internet Gateway to the VPC.
- Create Subnets:
  - Define public and private subnets in each Availability Zone.
  - Enable auto-assign public IP settings for the public subnets.
- Create Route Table:
  - Add a route to direct network traffic to the Internet Gateway.
  - Associate the route table with the subnets.
- Create NAT Gateway:
  - Deploy a NAT Gateway in a public subnet to give the private subnets internet access.
- Configure Security Groups:
  - Allow only the necessary inbound and outbound traffic.
- Set Up Route 53:
  - Manage domain name registration and DNS records.
- Use AWS Certificate Manager:
  - Manage SSL/TLS certificates for secure communication.
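The networking steps above can be sketched with the AWS CLI. The snippet below is a dry-run sketch, not a definitive script: the CIDR blocks, Availability Zone, and resource IDs (`vpc-EXAMPLE`, `igw-EXAMPLE`, and so on) are placeholder values, and every command is only printed until `DRY_RUN=0` is set.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the VPC networking steps. All IDs and CIDRs are examples.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"
# Print the command when dry-running; execute it otherwise.
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run aws ec2 create-vpc --cidr-block 10.0.0.0/16
run aws ec2 modify-vpc-attribute --vpc-id vpc-EXAMPLE \
    --enable-dns-hostnames '{"Value":true}'
run aws ec2 create-internet-gateway
run aws ec2 attach-internet-gateway --internet-gateway-id igw-EXAMPLE \
    --vpc-id vpc-EXAMPLE
run aws ec2 create-subnet --vpc-id vpc-EXAMPLE --cidr-block 10.0.0.0/24 \
    --availability-zone us-east-1a    # Public Subnet AZ1
run aws ec2 create-nat-gateway --subnet-id subnet-EXAMPLE \
    --allocation-id eipalloc-EXAMPLE
```

Review the printed commands, substitute the real IDs returned by each call, and re-run with `DRY_RUN=0` to execute.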
- Create a Personal Access Token:
  - Docker will use this GitHub token to clone the application code repository when building the Docker image.
- Set Up Project Folder:
  - Create a folder in Visual Studio Code to host the necessary files, such as the Dockerfile and AppServiceProvider.php.
- Build Dockerfile:
  - Create a Dockerfile to build the Docker image, updating values and information as needed.
- Replace AppServiceProvider.php:
  - Enter the script into the new AppServiceProvider.php file to ensure proper redirection from HTTP to HTTPS.
- Manage Sensitive Information:
  - Rename the Dockerfile to Dockerfile-reference and create a .gitignore file to prevent committing sensitive information to GitHub.
- Create New Dockerfile:
  - Create a new Dockerfile in the rentzone folder, incorporating Build Arguments and Environment Variables.
- Build Docker Image:
  - Write a script (build_image.sh) to build the Docker image, setting values for the Build Arguments.
- Make Shell Script Executable:
  - Run chmod +x build_image.sh to make the shell script executable.
- Execute Build Script:
  - Run the build_image.sh script in Visual Studio Code's integrated terminal to build the Docker image.
- Visit the AWS CLI installation guide and follow the instructions specific to your operating system to install the AWS CLI.
- Create an IAM user with administrative access. Generate an access key ID and secret access key for this user to authenticate with AWS and push container images to ECR.
- Run the following command in Command Prompt or a terminal to configure the AWS CLI with your IAM user credentials, entering the access key ID and secret access key when prompted:

  ```shell
  aws configure
  ```
- Create a repository in Amazon ECR using the AWS CLI:

  ```shell
  aws ecr create-repository --repository-name <repository-name> --region <region>
  ```
- Tag the Docker image with the ECR repository URI. Replace the placeholders with the actual tag name and repository URI:

  ```shell
  docker tag <image-tag> <repository-uri>
  ```
- Log in to Amazon ECR, then push the Docker image:

  ```shell
  aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
  docker push <repository-uri>
  ```
- Create a Key Pair for SSH access to the management instance.
- Download the Key Pair .pem file and move it to your PowerShell or terminal working directory to simplify SSH commands.
- Launch a Bastion Host instance using the Amazon Linux 2 AMI and T2 Micro instance type.
- Attach the Key Pair created earlier and select the appropriate VPC.
- Choose the Public AZ1 Subnet and assign the Bastion Host Security Group.
- Download the Flyway Community Edition and open the Flyway folder in Visual Studio Code.
- Under the conf directory, create a flyway.conf file with the following configuration:

  ```
  flyway.url=jdbc:mysql://localhost:3306/
  flyway.user=
  flyway.password=
  flyway.locations=filesystem:sql
  flyway.cleanDisabled=false
  ```

- Update the flyway.conf file with your RDS configuration details.
- Add SQL scripts to the sql directory within the Flyway folder.
- Rename each SQL script to Flyway's versioned-migration format, V<version>__<description>.sql (for example, V1__initial_schema.sql), so Flyway recognizes it.
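The naming convention can be checked with a quick shell test. The snippet below is an illustrative sketch: the pattern matches Flyway's versioned-migration shape (capital V, a version, a double underscore, then a description), and the file names are examples.

```shell
#!/usr/bin/env bash
# Check that a migration file name follows Flyway's
# V<version>__<description>.sql convention.
is_versioned_migration() {
  case "$1" in
    V[0-9]*__*.sql) return 0 ;;   # e.g. V1__initial_schema.sql
    *)              return 1 ;;
  esac
}

is_versioned_migration "V1__initial_schema.sql" && echo "ok"
is_versioned_migration "schema.sql" || echo "rename needed"
```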
- To set up an SSH tunnel to the RDS instance, use the command for your operating system:
  - PowerShell:

    ```shell
    ssh -i <key_pair.pem> ec2-user@<public-ip> -L 3306:<rds-endpoint>:3306 -N
    ```

  - Linux/macOS:

    ```shell
    ssh -i "YOUR_EC2_KEY" -L LOCAL_PORT:RDS_ENDPOINT:REMOTE_PORT EC2_USER@EC2_HOST -N -f
    ```

- Open a terminal, navigate to where your Key Pair is stored, and execute the command.
- In the Flyway directory, run the Flyway migration command:

  ```shell
  ./flyway migrate
  ```
- First, create a Target Group with IPv4 address type and HTTP on port 80.
- Create the Application Load Balancer in the VPC, selecting Public Subnet AZ1 and AZ2. Use the Application Load Balancer Security Group created earlier.
- Request an SSL certificate from AWS Certificate Manager, using DNS validation.
- Once requested, create a Route 53 record set to validate domain ownership.
- Create an HTTPS Listener for the Application Load Balancer using the SSL certificate.
- Configure the Listener to use port 443.
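The HTTPS listener can also be created from the CLI. This sketch assembles the command so it can be reviewed before running; the three ARNs are placeholder values for illustration.

```shell
#!/usr/bin/env bash
# Sketch: create the HTTPS (port 443) listener for the ALB. All ARNs are examples.
set -euo pipefail
LB_ARN="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example-alb/abc"
TG_ARN="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-tg/def"
CERT_ARN="arn:aws:acm:us-east-1:111122223333:certificate/example-id"

CMD="aws elbv2 create-listener --load-balancer-arn $LB_ARN \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=$CERT_ARN \
  --default-actions Type=forward,TargetGroupArn=$TG_ARN"
echo "$CMD"   # review, then run with: eval "$CMD"
```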
- Create an environment file named rentzone.env to store the Docker environment variables:

  ```
  PERSONAL_ACCESS_TOKEN=
  GITHUB_USERNAME=
  REPOSITORY_NAME=
  WEB_FILE_ZIP=
  WEB_FILE_UNZIP=
  DOMAIN_NAME=
  RDS_ENDPOINT=
  RDS_DB_NAME=
  RDS_MASTER_USERNAME=
  RDS_DB_PASSWORD=
  ```

- Add rentzone.env to the .gitignore file to prevent it from being tracked by Git.
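Before uploading the file, it is worth checking that every variable has a value. A self-contained sketch (it writes a small sample rentzone.env in the current directory purely for illustration; the variable values shown are made up):

```shell
#!/usr/bin/env bash
# Verify that no variable in rentzone.env is left empty.
# A sample file is created here so the sketch runs standalone.
cat > rentzone.env <<'EOF'
GITHUB_USERNAME=example-user
RDS_DB_NAME=rentzone
RDS_DB_PASSWORD=
EOF

missing=0
while IFS='=' read -r key value; do
  [ -z "$key" ] && continue
  if [ -z "$value" ]; then
    echo "missing value for: $key"
    missing=$((missing + 1))
  fi
done < rentzone.env
echo "empty variables: $missing"
```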
- Create an S3 Bucket in the same region as your VPC.
- Upload the rentzone.env file to the S3 Bucket.
- Create an IAM Role for ECS with permission to access the S3 bucket.
- Attach inline policies to the role granting the s3:GetObject and s3:GetBucketLocation actions.
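An inline policy granting just those two actions might look like the following; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetBucketLocation"],
      "Resource": [
        "arn:aws:s3:::your-env-bucket",
        "arn:aws:s3:::your-env-bucket/*"
      ]
    }
  ]
}
```

Note that s3:GetBucketLocation applies to the bucket ARN, while s3:GetObject applies to the objects inside it.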
- Create an ECS Cluster and assign the VPC.
- Select Private App Subnet AZ1 and AZ2.
- Create a Task Definition specifying CPU, memory, and IAM role.
- Retrieve the image URI from the ECR repository.
- Configure environment variables using the S3 bucket ARN.
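In the task definition, the S3-hosted environment file is referenced under environmentFiles. A container-definition fragment as a sketch, with placeholder account, region, and bucket values:

```json
{
  "name": "rentzone",
  "image": "<aws_account_id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest",
  "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
  "environmentFiles": [
    {
      "value": "arn:aws:s3:::your-env-bucket/rentzone.env",
      "type": "s3"
    }
  ]
}
```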
- Create the ECS Service:
  - Select the existing ECS cluster you created.
  - Under Application Type, choose 'Use custom' and select the Task Definition created earlier.
- Configure Desired Tasks:
  - Set the number of desired tasks to 2.
- Configure Networking:
  - Ensure the VPC environment is selected.
  - Enable Private App Subnet AZ1 and AZ2.
- Select Security Group:
  - Choose the existing Security Group and remove the default Security Group.
- Public IP Settings:
  - Turn off the public IP setting, since the containers run in private subnets.
- Load Balancing:
  - Select the existing Application Load Balancer.
- Listener Configuration:
  - Choose the existing Port 80 HTTP Listener.
- Target Group:
  - Select the existing Target Group.
- Service Auto Scaling:
  - Set Auto Scaling policies for the minimum and maximum number of tasks:
    - Minimum number of tasks: 1
    - Maximum number of tasks: 4
  - Configure the policy name, target value, scale-out cooldown period, and scale-in cooldown period.
- Verify Service:
  - After a few minutes, check the ECS Service in the ECS console. Under the Health and Metrics tab, you should see 2 healthy targets running.
- Create a Record Set:
  - Go to the Route 53 hosted zone.
  - Select the domain name and set the record type to 'A'.
  - Set the record name to 'www'.
- Route Traffic:
  - Route traffic to 'Alias to Application and Classic Load Balancer'.
  - Select the region where the Application Load Balancer was created.
  - Choose the Application Load Balancer.
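The same alias record can be created from the CLI with change-resource-record-sets. A change-batch sketch; the domain, the load balancer's DNS name, and the hosted-zone ID (the ALB's zone ID, not your own hosted zone's) are placeholders:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.your-domain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<alb-hosted-zone-id>",
          "DNSName": "dualstack.<alb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```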
- Clone the Repository:

  ```shell
  git clone <repository-url>
  ```

- Create Required Resources:
  - Follow the AWS documentation to set up the necessary resources such as the VPC, subnets, and Internet Gateway.
- Set Up the Application:
  - Use the provided scripts to deploy the WordPress application on EC2 instances within the VPC.
- Configure Services:
  - Set up the Auto Scaling Group, Load Balancer, and other services according to the architecture.
- Access the Application:
  - Access the WordPress website via the Load Balancer's DNS name.
- AWS Documentation: Detailed guides on setting up VPC, EC2, Auto Scaling, Load Balancer, and other services.
- GitHub Repository Files: Access repository files for scripts, architectural diagrams, and configuration files at https://github.com/Jundyn/Host-a-Dynamic-Web-App-on-AWS-with-Docker-Amazon-ECR-and-ECS.
Contributions are welcome! Please fork the repository and submit a pull request with your enhancements.
- Scalable and Reliable Infrastructure: Automated Docker image builds and deployments to AWS ECS, reducing deployment time and minimizing human error.
- Secure and Efficient Image Management: Used AWS ECR for secure Docker image management, addressing version control and security.
- Database Management: Utilized AWS RDS for MySQL, including read replicas for load distribution and effective database management.
- High Availability: Ensured application availability by deploying across two Availability Zones and using Elastic Load Balancing.
- Cost Management: Optimized costs through AWS Cost Explorer, AWS Budgets, and Reserved Instances.
- AWS Authentication and Permissions: Managed IAM roles and permissions carefully to avoid access errors. Verified and set up IAM policies with the least privilege principle.
- Increasing Costs: Mitigated costs by turning off unused resources and updating Dockerfile with new Elastic IP and RDS Endpoint information.
- Docker Image Management: Addressed image creation, tagging, and pushing challenges with careful management.
- Network Configuration: Resolved networking issues by double-checking configurations and ensuring correct setup of VPCs, Security Groups, and Load Balancers.
- Docker Image Management: Automated build and push processes using CI/CD pipelines to ensure consistency and reduce human error.
- Network Configuration: Utilized Terraform and AWS CloudFormation for standardizing and automating network resource creation.
- AWS Linux and Bash: Enhanced skills in AWS Linux and bash scripting through practical use in Visual Studio Code.
- Networking: Gained knowledge in VPC and NAT Gateway configuration for internet access and resource connectivity.
- IAM Management Best Practices: Avoided using root accounts for cloud creation, enforced MFA, and regularly updated root account security.
- Dockerfile Best Practices: Created a .gitignore file to prevent sensitive information from being committed to GitHub.
- AWS ECS and Docker Integration: Improved understanding of container orchestration and management with AWS ECS and Docker.
- Clone the Repository:

  ```shell
  git clone https://github.com/your-repo/your-project.git
  cd your-project
  ```
- Create the Dockerfile for the dynamic web app:

  ```dockerfile
  # Use the Amazon Linux 2 base image
  FROM amazonlinux:2

  # Update all installed packages to their latest versions
  RUN yum update -y

  # Install necessary packages
  RUN yum install -y unzip wget httpd git

  # Install PHP
  RUN amazon-linux-extras enable php7.4 && yum clean metadata && yum install -y \
      php php-common php-pear php-cgi php-curl php-mbstring php-gd php-mysqlnd \
      php-gettext php-json php-xml php-fpm php-intl php-zip

  # Configure the MySQL repository and install MySQL
  RUN wget https://repo.mysql.com/mysql80-community-release-el7-3.noarch.rpm && \
      rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022 && \
      yum localinstall mysql80-community-release-el7-3.noarch.rpm -y && \
      yum install mysql-community-server -y

  # Clone and prepare the application
  WORKDIR /var/www/html

  ARG PERSONAL_ACCESS_TOKEN
  ARG GITHUB_USERNAME
  ARG REPOSITORY_NAME
  ARG WEB_FILE_ZIP
  ARG WEB_FILE_UNZIP
  ARG DOMAIN_NAME
  ARG RDS_ENDPOINT
  ARG RDS_DB_NAME
  ARG RDS_MASTER_USERNAME
  ARG RDS_DB_PASSWORD

  ENV PERSONAL_ACCESS_TOKEN=$PERSONAL_ACCESS_TOKEN \
      GITHUB_USERNAME=$GITHUB_USERNAME \
      REPOSITORY_NAME=$REPOSITORY_NAME \
      WEB_FILE_ZIP=$WEB_FILE_ZIP \
      WEB_FILE_UNZIP=$WEB_FILE_UNZIP \
      DOMAIN_NAME=$DOMAIN_NAME \
      RDS_ENDPOINT=$RDS_ENDPOINT \
      RDS_DB_NAME=$RDS_DB_NAME \
      RDS_MASTER_USERNAME=$RDS_MASTER_USERNAME \
      RDS_DB_PASSWORD=$RDS_DB_PASSWORD

  # Clone the private repo, unzip the web files, and point the .env at RDS
  RUN git clone https://$PERSONAL_ACCESS_TOKEN@github.com/$GITHUB_USERNAME/$REPOSITORY_NAME.git && \
      unzip $REPOSITORY_NAME/$WEB_FILE_ZIP -d $REPOSITORY_NAME/ && \
      cp -av $REPOSITORY_NAME/$WEB_FILE_UNZIP/. /var/www/html && \
      rm -rf $REPOSITORY_NAME && \
      sed -i '/<Directory "\/var\/www\/html">/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' /etc/httpd/conf/httpd.conf && \
      chmod -R 777 /var/www/html && \
      chmod -R 777 storage/ && \
      sed -i '/^APP_ENV=/ s/=.*$/=production/' .env && \
      sed -i "/^APP_URL=/ s/=.*$/=https:\/\/$DOMAIN_NAME\//" .env && \
      sed -i "/^DB_HOST=/ s/=.*$/=$RDS_ENDPOINT/" .env && \
      sed -i "/^DB_DATABASE=/ s/=.*$/=$RDS_DB_NAME/" .env && \
      sed -i "/^DB_USERNAME=/ s/=.*$/=$RDS_MASTER_USERNAME/" .env && \
      sed -i "/^DB_PASSWORD=/ s/=.*$/=$RDS_DB_PASSWORD/" .env

  COPY AppServiceProvider.php app/Providers/AppServiceProvider.php

  EXPOSE 80 3306

  ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
  ```
- Build Docker Image:

  ```shell
  docker build \
    --build-arg PERSONAL_ACCESS_TOKEN=<your_token> \
    --build-arg GITHUB_USERNAME=<your_username> \
    --build-arg REPOSITORY_NAME=<your_repo> \
    --build-arg WEB_FILE_ZIP=<your_zip> \
    --build-arg WEB_FILE_UNZIP=<your_unzip> \
    --build-arg DOMAIN_NAME=<your_domain> \
    --build-arg RDS_ENDPOINT=<rds_endpoint> \
    --build-arg RDS_DB_NAME=<rds_db_name> \
    --build-arg RDS_MASTER_USERNAME=<rds_username> \
    --build-arg RDS_DB_PASSWORD=<rds_password> \
    -t <image-tag> .
  ```
- Push Docker Image to Amazon ECR:

  ```shell
  # Create the ECR repository
  aws ecr create-repository --repository-name <repository-name> --region <region>

  # Log in to ECR, then tag and push the Docker image
  aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
  docker tag <image-tag> <repository-uri>
  docker push <repository-uri>
  ```
- Set Environment Variables:

  ```shell
  aws s3 cp s3://your-bucket-name/env-vars.json .
  ```
- Run Database Migrations (the project uses MySQL RDS, so the JDBC URL targets MySQL on port 3306):

  ```shell
  flyway -url=jdbc:mysql://<db-endpoint>:3306/<your-db> -user=<your-username> -password=<your-password> migrate
  ```
Deployment scripts and configurations are provided in the GitHub repository. Ensure to review and adjust them according to your environment and requirements.
The deployed web page is available in the
Here are some useful resources that can help you understand and customize the deployment process further:
Feel free to customize the scripts and configurations according to your specific needs. This README is designed to provide a comprehensive guide that showcases the skills and processes involved in this DevOps project.