Jithsaavvy/Deploying-an-end-to-end-keyword-spotting-model-into-cloud-server-by-integrating-CI-CD-pipeline
Future steps
Jithsaavvy opened this issue · 1 comment
Jithsaavvy commented
- Implement a data management pipeline for data extraction, validation, data version control, etc.
- Use cloud storage services such as an Amazon S3 bucket to store data, artifacts, and predictions.
- Even though exception handling is implemented in the code, it is equally important to write separate test cases covering different failure scenarios.
- Orchestrate the entire workflow as an automated pipeline using orchestration tools such as Airflow or Kubeflow. Since this is a small personal project with a static dataset, the pipeline can be built from plain function calls, but for large, scalable, real-time workflows it is crucial to replace them with an orchestration tool.
- Implement a Continuous Training (CT) pipeline along with CI/CD.
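As a rough illustration of the validation step in such a data management pipeline, a per-sample metadata check might look like the sketch below. The field names, keyword set, and thresholds are assumptions for illustration, not values taken from this repository:

```python
# Minimal sketch of a data-validation step for a keyword-spotting dataset.
# All field names and thresholds below are hypothetical assumptions.

KEYWORDS = {"yes", "no", "up", "down", "left", "right"}  # assumed label set


def validate_sample(sample: dict) -> list:
    """Return a list of validation errors for one sample's metadata."""
    errors = []
    if sample.get("sample_rate") != 16000:
        errors.append("unexpected sample rate")
    if not 0.1 <= sample.get("duration_s", 0.0) <= 2.0:
        errors.append("duration out of range")
    if sample.get("label") not in KEYWORDS:
        errors.append("unknown label")
    return errors


def validate_dataset(samples: list) -> dict:
    """Split samples into valid and rejected, keeping rejection reasons."""
    report = {"valid": [], "rejected": []}
    for s in samples:
        errs = validate_sample(s)
        if errs:
            report["rejected"].append((s, errs))
        else:
            report["valid"].append(s)
    return report
```

A step like this would sit between extraction and training, so that malformed samples are reported (and versioned alongside the data) rather than silently dropped.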
Jithsaavvy commented
Added test cases using pytest and automated a new workflow that runs all tests on every commit.
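Such pytest cases might look like the sketch below; `predict_keyword` and its validation rules are hypothetical stand-ins for the project's actual prediction function, used here only to show the shape of the tests:

```python
import pytest


def predict_keyword(audio, sample_rate=16000):
    """Hypothetical wrapper around the model's prediction call."""
    if audio is None or len(audio) == 0:
        raise ValueError("empty audio input")
    if sample_rate != 16000:
        raise ValueError("unsupported sample rate")
    # Model inference would go here; return a dummy label for the sketch.
    return "keyword"


def test_empty_audio_raises():
    # The exception path should be exercised explicitly, not just implemented.
    with pytest.raises(ValueError):
        predict_keyword([])


def test_wrong_sample_rate_raises():
    with pytest.raises(ValueError):
        predict_keyword([0.1, 0.2], sample_rate=8000)
```

Running these on every commit in the CI workflow guards the exception-handling behaviour against regressions.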
Future steps - remaining
- Implement a data management pipeline for data extraction, validation, data version control, etc.
- Use cloud storage services such as an Amazon S3 bucket to store data, artifacts, and predictions.
- Orchestrate the entire workflow as an automated pipeline using orchestration tools such as Airflow or Kubeflow. Since this is a small personal project with a static dataset, the pipeline can be built from plain function calls, but for large, scalable, real-time workflows it is crucial to replace them with an orchestration tool.
- Implement a Continuous Training (CT) pipeline along with CI/CD.
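The "normal function calls" version of the pipeline mentioned above could be chained as in the sketch below, where each step would later map to an Airflow or Kubeflow task. All function names and the dummy data are assumptions for illustration:

```python
# Sketch of the workflow as plain function calls; with an orchestrator,
# each step would become a task/operator. All names here are hypothetical.

def extract_data():
    # Stand-in for loading the static dataset from disk or S3.
    return [{"label": "yes"}, {"label": "no"}]


def preprocess(raw):
    # Stand-in for feature extraction (e.g. MFCCs) on each sample.
    return [dict(s, features=[0.0]) for s in raw]


def train(features):
    # Stand-in for model training; returns a dummy "model" artifact.
    return {"num_samples": len(features)}


def evaluate(model):
    # Stand-in for evaluation; returns a dummy metric.
    return {"accuracy": 1.0 if model["num_samples"] > 0 else 0.0}


def run_pipeline():
    # Plain sequential chaining; an orchestrator would add scheduling,
    # retries, and dependency tracking on top of this same structure.
    raw = extract_data()
    features = preprocess(raw)
    model = train(features)
    return evaluate(model)
```

Keeping each step as a self-contained function makes the later migration straightforward, since orchestration tools wrap exactly this kind of unit into tasks.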