docker-pytorch-api

Deploying PyTorch as a REST API using Docker and FastAPI, with CUDA support

Setup

  • Ubuntu 18.04.3 LTS (bionic)
  • Python 3.8
  • CUDA 10.1
  • cuDNN 7.6.4
  • PyTorch 1.10.0

Running just the model API without Docker

Let's start from the very beginning, before any Docker: just good old Python virtual environments. The terminal commands below assume you have Python 3.8 installed as python3.8, and set up a virtual environment the old-fashioned way with the machine learning libraries we need, such as torch and sklearn.

you@you:/path/to/folder$ pip3 install virtualenv

you@you:/path/to/folder$ virtualenv venv --python=python3.8

you@you:/path/to/folder$ source venv/bin/activate

(venv) you@you:/path/to/folder$ pip3 install -r requirementsDS.txt

(venv) you@you:/path/to/folder$ jupyter notebook

Navigate to notebook/Model.ipynb to train a toy model and save both the model and the preprocessing module for later use by the API:

from joblib import dump
import torch  # already imported earlier in the notebook

# persist the trained weights and the fitted preprocessing module
torch.save(model.state_dict(), '../model/model.pt')
dump(scaler, '../model/scaler.joblib', compress=True)
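
The API loads these artifacts back at startup. A minimal sketch of that side, assuming the model class from the notebook is called Net and that paths are relative to wherever the API runs (check the actual names and paths in the code):

import torch
from joblib import load

scaler = load('../model/scaler.joblib')                 # restore the fitted preprocessing module
model = Net()                                           # same architecture as in the notebook (name assumed)
model.load_state_dict(torch.load('../model/model.pt'))  # restore the trained weights
model.eval()                                            # switch to inference mode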

In the app/ directory, run:

(venv) you@you:/path/to/folder$  python main.py

Navigate to http://localhost:8080/docs to test the API
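
You can also hit the endpoint directly from the terminal; the /predict route and the JSON payload below are placeholders, so check main.py for the real route and request schema

curl -X POST http://localhost:8080/predict -H "Content-Type: application/json" -d '{"data": [[0.1, 0.2, 0.3]]}'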

If you want to know which files are needed for the model API, they are only the files used by main.py and the scripts it imports.

Automated testing

In the app/ directory, run:

(venv) you@you:/path/to/folder$  python -m pytest
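
A test might look roughly like the sketch below, which assumes the FastAPI app object lives in main.py and exposes a /predict route; adjust the names and payload to the actual code

# test_api.py -- sketch using FastAPI's TestClient
from fastapi.testclient import TestClient
from main import app  # module and app name assumed

client = TestClient(app)

def test_predict_returns_200():
    # the payload shape is a placeholder; match it to the real request schema
    response = client.post('/predict', json={'data': [[0.1, 0.2, 0.3]]})
    assert response.status_code == 200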

Deploying with Docker

Installing Docker on Ubuntu and some basic Docker commands

Here we build the model API into a Docker image. All the dependencies our model API needs are contained inside the image and will not conflict with other APIs or applications when we scale it.

What are the specs for the image we want to grab from Docker Hub?

What cuDNN version am I using?

(venv) you@you:/path/to/folder$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2

The below is an example where the cuDNN version is 7.6.3; the exact lines vary with the header version, but the output looks something like this:
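
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 3
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)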

What CUDA version am I using?

(venv) you@you:/path/to/folder$ nvcc --version

The below is CUDA 10.1; the exact build string varies, but the "release 10.1" line is the part that matters:
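
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Cuda compilation tools, release 10.1, V10.1.243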

What Ubuntu version am I using?

(venv) you@you:/path/to/folder$ lsb_release -a

The below is version 18.04

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:    18.04
Codename:   bionic

Tags for Nvidia GPU Docker Images

NVIDIA Docker Images

If you have my exact specs, you would choose

FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04

in your Dockerfile

What's a Dockerfile?

The Dockerfile turns this application into a Docker image. Inside you will see a commented file showing how an NVIDIA CUDA image is built first, then apt-get and Miniconda are used to install Python 3.8, then PyTorch, and finally the API and the model itself are loaded. Last, the entrypoint is set to /start.sh and port 80 of the image is exposed.
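
As a rough sketch of that structure (the real Dockerfile in this repo is the source of truth; the package steps, requirements file name, and paths below are illustrative):

# start from an NVIDIA CUDA runtime image matching the host CUDA/cuDNN/Ubuntu versions
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04

# system packages, then Miniconda to provide Python 3.8
RUN apt-get update && apt-get install -y wget && rm -rf /var/lib/apt/lists/*
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh \
    && bash /tmp/miniconda.sh -b -p /opt/conda && rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
RUN conda install -y python=3.8

# PyTorch and the rest of the API's dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# the API code, the trained model artifacts, and the start script
COPY app/ /app/
COPY model/ /model/
COPY start.sh /start.sh
RUN chmod +x /start.sh
WORKDIR /app

# serve on port 80 via the start script
EXPOSE 80
ENTRYPOINT ["/start.sh"]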

build
(venv) you@you:/path/to/docker-pytorch-api$ bash docker_build.sh

This may take a while. You can also do

docker build --compress -t ml/project1:latest .

or if you have a Dockerfile not named Dockerfile

docker build -t my/project:latest -f MyDockerfile .

but if you get an error like this

permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock

try

sudo usermod -a -G docker $USER

or

sudo gpasswd -a $USER docker

where $USER is vicki in the case that your prompt looks like (env) vicki@virtual-machine:~/home/vicki$
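
Group changes only apply to new sessions, so log out and back in, or refresh the group in the current shell:

newgrp docker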

run

docker run with GPUs
keep Docker running

(venv) you@you:/path$ bash docker_run_local.sh

or

(venv) you@you:/path$ docker run -d -t --name carson-test0 --gpus '"device=0,1"' newnode/carson-test:latest

The -t (pseudo-tty) Docker parameter keeps the container running; if you attach to it you will find yourself inside, so try ls. The -d parameter runs the container in the background and prints the container ID.

and to see it running

(venv) you@you:/path/to/docker-pytorch-api$ docker ps
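
Because the container was started detached, its output is not shown in your terminal; you can check it with docker logs, using the container name or ID from docker ps (carson-test0 here is the name from the run command above)

(venv) you@you:/path/to/docker-pytorch-api$ docker logs carson-test0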

to stop it

(venv) you@you:/path/to/docker-pytorch-api$ docker stop 0ab99d8ab11c

to attach the container to the terminal session

docker exec -it 0ab99d8ab11c /bin/bash

where 0ab99d8ab11c is the <CONTAINER_ID>

Credit and references

Thank you to Ming for the original version of this tutorial

Thank you to Nikolai Janakiev for the toy model