Orchestrating web applications in containerized environments is common practice in modern software systems, including ML-based deployments. This project offers a hands-on exploration of deploying an ML-based web application using an orchestrator/worker architecture on the AWS cloud platform.
Containerization, facilitated by technologies such as Docker, allows developers to encapsulate their applications with all necessary dependencies into portable containers. This ensures consistent application performance across different computing environments.
As applications scale, orchestration becomes essential: managing multiple containers, ensuring scalability, and automating deployment. Tools such as Docker Compose and Kubernetes are commonly used for this purpose.
In the main/worker style, a central entity (the main, or orchestrator) manages multiple independent virtual computing instances (the workers). The main distributes tasks, coordinates, and controls; the workers execute tasks independently. This design lends itself well to parallel workloads.
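The pattern is independent of any particular technology. As a single-process analogy (all names here are illustrative, not taken from this project's code), the main role can be played by a thread that pushes tasks onto a shared queue while worker threads consume and execute them independently:

```python
import queue
import threading

task_queue = queue.Queue()

def worker(worker_id: int) -> None:
    """Worker: pull tasks and execute them until a stop sentinel arrives."""
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: no more work
            task_queue.task_done()
            break
        print(f"worker {worker_id} handling {task}")
        task_queue.task_done()

# The main entity starts the workers and distributes tasks among them.
workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in workers:
    t.start()

for task in ["request-1", "request-2", "request-3"]:
    task_queue.put(task)

for _ in workers:
    task_queue.put(None)  # tell each worker to stop
task_queue.join()
```

In this project, the same roles are played by separate EC2 instances communicating over HTTP.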
The goal of this project is to automate the deployment of an ML model using a containerized approach, following the main/worker architectural style. The deployment architecture is shown below. The main steps are:
Create Dockerfiles for both the worker and the orchestrator. For the workers, include a docker-compose.yaml file to run multiple containers on the same instance. These files are available in our GitHub repository.
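For intuition, the effect of the compose file can be mimicked imperatively with the Docker SDK for Python: start two copies of the worker image on the same host, each bound to a different port. The image name and port numbers below are placeholders, not values from the repository:

```python
import docker

client = docker.from_env()

# Run two instances of the (hypothetical) worker image on different host
# ports, mirroring what docker-compose.yaml does declaratively.
for host_port in (8001, 8002):
    client.containers.run(
        "ml-worker:latest",          # placeholder image name
        detach=True,
        ports={"8000/tcp": host_port},
    )
```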
Launch four EC2 instances (m4.large, 32 GB storage). Install Docker and Docker Compose, clone the worker source code, build the Docker images, and start the containers. Test the setup at `worker_IP:port1/run_model` and `worker_IP:port2/run_model`.
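Each worker container serves the model behind an HTTP endpoint. Below is a minimal sketch of such a worker, assuming Flask and a stubbed-out inference function; the real model-serving code lives in the worker source:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_inference(payload):
    # Placeholder for loading the ML model and computing a prediction;
    # the actual implementation is in the repository.
    return {"prediction": "stub", "input": payload}

@app.route("/run_model", methods=["GET", "POST"])
def run_model():
    payload = request.get_json(silent=True) or {}
    return jsonify(run_inference(payload))

if __name__ == "__main__":
    # Each container maps a different host port (port1, port2) to this one.
    app.run(host="0.0.0.0", port=8000)
```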
Complete the orchestrator code to forward requests to the workers. Deploy a single EC2 instance (m4.large, 16 GB storage), install Docker, clone the source code, build the Docker image, and run it.
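The forwarding logic can be sketched as follows, assuming the orchestrator keeps a list of worker endpoints and cycles through them round-robin; the endpoint list and the scheduling policy are assumptions, not necessarily the repository's exact implementation:

```python
import itertools

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder worker endpoints; in this setup there are four instances
# running two containers each.
WORKERS = [
    "http://worker_IP:port1/run_model",
    "http://worker_IP:port2/run_model",
    # ... remaining workers
]
_next_worker = itertools.cycle(WORKERS)

@app.route("/run_model", methods=["POST"])
def forward():
    # Pick the next worker in round-robin order and relay the request.
    url = next(_next_worker)
    resp = requests.post(url, json=request.get_json(silent=True), timeout=30)
    return jsonify(resp.json()), resp.status_code
```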
- Add AWS credentials to `~/.aws/credentials`.
- Clone the repository and navigate to the infrastructure directory:

```bash
git clone git@github.com:mh-malekpour/Deploying-ML-Model-on-AWS.git
cd Deploying-ML-Model-on-AWS/infrastructure
```
- Set up and activate a virtual environment:

```bash
python -m virtualenv venv
source venv/bin/activate
```
- Install dependencies:

```bash
pip install -r requirements.txt
```
- Execute the main script to set up the infrastructure:

```bash
python main.py
```
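Under stated assumptions about the region, AMI, and key pair (all placeholders below), the kind of boto3 calls a script like main.py would make to launch the worker fleet looks like this:

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # assumed region

# Launch the four worker instances; the AMI ID and key name are placeholders.
instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",         # placeholder AMI
    InstanceType="m4.large",
    MinCount=4,
    MaxCount=4,
    KeyName="my-key-pair",          # placeholder key pair
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 32},  # 32 GB of storage per worker
    }],
)

for instance in instances:
    instance.wait_until_running()
    instance.reload()
    print(instance.public_ip_address)
```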
- Execute `python workers.py` to set up the workers.
- Execute `python orchestrator.py` to set up the orchestrator.
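One plausible way such scripts configure the freshly launched instances (a sketch only, not the repository's actual code) is to SSH into each one with paramiko and run the install and startup commands:

```python
import paramiko

def provision(host: str, commands: list[str]) -> None:
    """SSH into an instance and run setup commands (illustrative only)."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Username and key file are placeholders.
    client.connect(host, username="ubuntu", key_filename="my-key.pem")
    for cmd in commands:
        _, stdout, stderr = client.exec_command(cmd)
        print(stdout.read().decode(), stderr.read().decode())
    client.close()

# Hypothetical worker setup: install Docker, fetch the code, and start the
# containers (package names and paths are assumptions).
worker_setup = [
    "sudo apt-get update && sudo apt-get install -y docker.io docker-compose",
    "git clone https://github.com/mh-malekpour/Deploying-ML-Model-on-AWS.git",
    "cd Deploying-ML-Model-on-AWS && sudo docker-compose up -d",
]
```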
To send requests to the orchestrator:
- Navigate to the client directory:

```bash
cd Deploying-ML-Model-on-AWS/client
```
- Execute the client script:

```bash
python main.py
```
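A minimal client that exercises the deployment might look like the following; the orchestrator address and the request schema are placeholders:

```python
import requests

# Substitute the orchestrator instance's public IP and port.
ORCHESTRATOR_URL = "http://orchestrator_IP:port/run_model"

# Example request body; the actual schema depends on the deployed model.
payload = {"input": "example"}

resp = requests.post(ORCHESTRATOR_URL, json=payload, timeout=30)
print(resp.status_code, resp.json())
```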
This project demonstrates the automated, containerized deployment of ML-based Flask web applications following the main/worker architectural style.