MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry.
MLflow Tracking Server
MLflow currently offers four components: MLflow Tracking, MLflow Projects, MLflow Models, and the MLflow Model Registry.
To track ML experiments and version models, we need to host the MLflow Tracking Server (also referred to here as the MLflow Server). To make this task simple, this repository offers a containerized solution for hosting the MLflow Server in minutes. The only thing you need to host it on your local system is the Docker Engine.
The documentation contains:
- Descriptions of the files
- Instructions to host the MLFlow Server
- Instructions to use the MLFlow Server
File Name | Description |
---|---|
Dockerfile | The Dockerfile used to create the Docker image |
start_server.sh | The script to start the MLflow Server |
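For reference, a start script for the MLflow Server usually just wraps the `mlflow server` command. The sketch below is only an illustration of what `start_server.sh` might contain; the backend store and artifact paths are assumptions, and the actual script in this repository may differ.

```bash
#!/bin/bash
# Illustrative sketch only -- the real start_server.sh may differ.
# Assumes the mlflow package is installed in the image and that the
# server should listen on all interfaces on port 5000.
mlflow server \
    --host 0.0.0.0 \
    --port 5000 \
    --backend-store-uri sqlite:///mlflow.db \
    --default-artifact-root ./mlruns
```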
Step 01: Build the Docker image of the MLflow Server
Run the following command in the directory that contains the Dockerfile.
docker build --network=host -t mlflow-server .
Check the image with the following command.
docker images
You should see the mlflow-server image in the output.
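If many images are present on your machine, you can filter the listing to just this repository name (a standard Docker feature, nothing specific to this image):

```bash
docker images mlflow-server
```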
Step 02: Run the MLflow Server container
Run the following command.
docker run -d --network=host --name=mlflow-server mlflow-server
This will create and start the MLflow Server container. Your container is now up and running, ready to serve tracking requests.
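The command above shares the host's network stack via `--network=host`. If you prefer to publish the port explicitly instead, a sketch of the equivalent command is below; it assumes the MLflow Server listens on port 5000 inside the container.

```bash
# Alternative to host networking: map the container's port 5000 to the host.
docker run -d -p 5000:5000 --name=mlflow-server mlflow-server
```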
Step 03: Verify the running container
Please execute the command below to verify whether the container is running.
docker ps
You should see the mlflow-server container running.
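If the container does not show up as running, its startup logs usually explain why. They can be inspected with:

```bash
docker logs mlflow-server
```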
Step 04: Launch the MLflow Server UI
Go to your browser and enter the URL below.
http://localhost:5000
Your MLflow Server is now ready to track your ML experiments and model versions.
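As a quick smoke test, you can point an MLflow client at the server and create an experiment. The sketch below uses the MLflow CLI together with the standard `MLFLOW_TRACKING_URI` environment variable; the experiment name `demo` is only a placeholder.

```bash
# Point the MLflow CLI (or any MLflow client) at the running server.
export MLFLOW_TRACKING_URI=http://localhost:5000

# Create an experiment on the server; it should then appear in the UI.
mlflow experiments create --experiment-name demo
```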
MLflow - Model Registry
Author: Pranay Chandekar