This repository contains the solution for the X-HEC MLOps Project on the industrialization of an Abalone age prediction model.
Build the Docker image with the following command:

```bash
docker build -t abalone:solution -f Dockerfile.app .
```
- Run the prediction API:

```bash
docker run -d -p 0.0.0.0:8000:8001 -p 0.0.0.0:4200:4201 abalone:solution
```
> **Note**
> The `-d` flag is used to run the container in detached mode; the container will thus run in the background.

If you want to get the logs of the container, you can:

i. Get the container ID:

```bash
docker ps
```

ii. Copy/paste the container ID.

iii. Get the logs:

```bash
docker logs <container_id> --follow
```
- In the `/predict` section, click on "Try it out".
- Replace the Request body with the data of your choice.
Example:

```json
[
  {
    "sex": "M",
    "length": 0.455,
    "diameter": 0.365,
    "height": 0.095,
    "whole_weight": 0.514,
    "shucked_weight": 0.2245,
    "viscera_weight": 0.101,
    "shell_weight": 0.15
  },
  {
    "sex": "M",
    "length": 0.35,
    "diameter": 0.265,
    "height": 0.09,
    "whole_weight": 0.2255,
    "shucked_weight": 0.0995,
    "viscera_weight": 0.0485,
    "shell_weight": 0.07
  }
]
```
- Click on "Execute" to get the prediction. You should get a 201 response with the prediction in the Response body, like this:

```json
{
  "predicted_abalone_ages": [
    9.28125,
    7.90625
  ]
}
```
- You can also visualize the Prefect Flow Runs here: http://localhost:4200/flow-runs
- When done with the API, you can stop the container with:

```bash
docker kill <container_id>
```
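Instead of the Swagger UI, the same request can be sent programmatically. Below is a minimal sketch using only the Python standard library; the URL assumes the container is running and publishing the API on port 8000 as in the `docker run` command above.

```python
import json
import urllib.request

# Two abalone samples, matching the request body shown above.
SAMPLES = [
    {
        "sex": "M",
        "length": 0.455,
        "diameter": 0.365,
        "height": 0.095,
        "whole_weight": 0.514,
        "shucked_weight": 0.2245,
        "viscera_weight": 0.101,
        "shell_weight": 0.15,
    },
    {
        "sex": "M",
        "length": 0.35,
        "diameter": 0.265,
        "height": 0.09,
        "whole_weight": 0.2255,
        "shucked_weight": 0.0995,
        "viscera_weight": 0.0485,
        "shell_weight": 0.07,
    },
]


def predict(samples, url="http://localhost:8000/predict"):
    """POST the samples to the prediction API and return the parsed JSON response."""
    payload = json.dumps(samples).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# With the container up, predict(SAMPLES) returns the same JSON body
# as the Swagger UI, e.g. {"predicted_abalone_ages": [...]}.
```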
This repository comes with a pre-trained model and pre-fitted encoder (`src/web_service/local_objects`). If you want to train the model yourself, you can:
- Install the dependencies in a virtual environment (Python 3.9 or higher):

```bash
pip install -r requirements.txt
pip install -r requirements-dev.txt
```
- Put your data in the `data` folder. The data should be a CSV file with the same columns as the Abalone dataset.
- Run the training script:

```bash
python3 src/modelling/main.py data/abalone.csv
```

This command will overwrite the pre-trained model and encoder in the `src/web_service/local_objects` folder.
You can finally rebuild the Docker image and run it again to use your new model.