ML project template for bootstrapping projects that need to run compute-intensive jobs.
- ML jobs are batched and run asynchronously via Redis.
- FastAPI serves as the application server.
- Nginx acts as a reverse proxy.
- Python ML workers consume jobs from Redis.
- Web server: Nginx
- Application server: FastAPI
- ML Workers: Python workers
- Message Queue: Redis
- Client side: Sapper
- Docker Compose is used to deploy all of the services.
```bash
cd MLtiplier
docker-compose build
docker-compose up
```
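For orientation, here is a minimal sketch of what the Compose file could look like, wiring together the four services named above. The service names, build contexts, and ports are assumptions for illustration; the `docker-compose.yml` in the repo is authoritative.

```yaml
# Hypothetical sketch; actual service names, ports, and build
# contexts in the repo's docker-compose.yml may differ.
version: "3.8"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app_server
  app_server:
    build: ./backend_services/app_server
    depends_on:
      - redis
  ml_worker:
    build: ./backend_services/ml_worker
    depends_on:
      - redis
  redis:
    image: redis:alpine
```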
- Add your ML workload in `backend_services/ml_worker/ml_worker/main.py` (see the worker sketch below).
- The ML job payload is pushed to Redis from `backend_services/app_server/app_server/main.py` (see the enqueue sketch after this list).
- Update the `ui/` directory to build the client side your MLtiplier-based project requires.
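To illustrate where the ML workload plugs in, here is a minimal worker loop that blocks on a Redis list and processes jobs as they arrive. The queue name `ml_jobs`, the result key scheme, the Redis host `redis` (the Compose service name), and the `run_inference` helper are all hypothetical, not taken from the repo.

```python
# Minimal sketch of an ML worker consuming jobs from Redis.
# Queue/key names and the Redis host are assumptions.
import json

import redis

r = redis.Redis(host="redis", port=6379)


def run_inference(payload: dict) -> dict:
    # Placeholder: replace with your actual ML workload.
    return {"result": sum(payload.get("inputs", []))}


if __name__ == "__main__":
    while True:
        # BLPOP blocks until a job arrives; it returns (queue_name, raw_payload).
        _, raw = r.blpop("ml_jobs")
        job = json.loads(raw)
        result = run_inference(job["payload"])
        # Store the result under the job id so the app server can fetch it.
        r.set(f"result:{job['id']}", json.dumps(result))
```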
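On the application-server side, enqueuing a job amounts to pushing a JSON payload onto the same Redis list and handing the client an id it can poll. This sketch assumes the same hypothetical queue and key names as the worker above; the endpoints `/jobs` and `/jobs/{job_id}` are illustrative, not the repo's actual routes.

```python
# Minimal sketch of a FastAPI app server enqueuing ML jobs into Redis.
# Queue/key names and routes are assumptions matching the worker sketch.
import json
import uuid

import redis
from fastapi import FastAPI

app = FastAPI()
r = redis.Redis(host="redis", port=6379)


@app.post("/jobs")
def submit_job(payload: dict):
    job_id = str(uuid.uuid4())
    # Push the job onto the queue the worker is blocking on.
    r.rpush("ml_jobs", json.dumps({"id": job_id, "payload": payload}))
    return {"job_id": job_id}


@app.get("/jobs/{job_id}")
def get_result(job_id: str):
    raw = r.get(f"result:{job_id}")
    if raw is None:
        return {"status": "pending"}
    return {"status": "done", "result": json.loads(raw)}
```

Keeping the queue purely in Redis means the app server never blocks on inference: it returns immediately with a job id, and the client polls for the result once a worker has written it back.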
- Writing tests
- Code review
- Other guidelines