TorchServe is a flexible and easy to use tool for serving PyTorch models.
For full documentation, see Model Server for PyTorch Documentation.
- Frontend: The request/response handling component of TorchServe. This portion of the serving component handles requests and responses coming from clients and manages the lifecycles of the models.
- Model Workers: These workers are responsible for running the actual inference; they are the running instances of the models.
- Model: A model can be a script_module (a JIT-saved model) or an eager_mode_model. Models can provide custom pre- and post-processing of data along with any other model artifacts, such as state_dicts. Models can be loaded from cloud storage or from local hosts.
- Plugins: Custom endpoints, authz/authn, or batching algorithms that can be dropped into TorchServe at startup time.
- Model Store: This is a directory in which all the loadable models exist.
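To make the pre- and post-processing idea above concrete, here is a minimal, hypothetical sketch of the preprocess/inference/postprocess flow a model worker runs. It is plain Python with a lambda standing in for a real loaded model; the class and method names are illustrative and do not show TorchServe's real handler API:

```python
# Hypothetical sketch of the handler pattern a model worker runs:
# a batch of requests flows through preprocess -> inference -> postprocess.
# (torch and TorchServe's real handler base class are intentionally omitted
# so this stays self-contained.)

class ToyHandler:
    """Illustrates the preprocess/inference/postprocess contract."""

    def __init__(self, model_fn):
        self.model_fn = model_fn  # stands in for a loaded script_module

    def preprocess(self, requests):
        # e.g. decode and normalize images; here we just pull out the payload
        return [r["data"] for r in requests]

    def inference(self, batch):
        # run the model on each item in the batch
        return [self.model_fn(x) for x in batch]

    def postprocess(self, outputs):
        # map raw outputs to one JSON-serializable response per request
        return [{"prediction": o} for o in outputs]

    def handle(self, requests):
        return self.postprocess(self.inference(self.preprocess(requests)))

handler = ToyHandler(lambda x: x * 2)
print(handler.handle([{"data": 3}, {"data": 5}]))
```

The same batch-in, batch-out shape is what lets the frontend group multiple client requests into one worker call.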
Conda instructions are provided in more detail, but you may also use pip and virtualenv if that is your preference.
Note: Java 11 is required. Instructions for installing Java 11 for Ubuntu or macOS are provided in the Install with Conda section.
- Install Java 11:
  sudo apt-get install openjdk-11-jdk
- Use pip to install TorchServe and the model archiver:
  pip install torch torchtext torchvision sentencepiece psutil future
  pip install torchserve torch-model-archiver
Note: For Conda, Python 3.8 is required to run TorchServe.
- Install Java 11:
  sudo apt-get install openjdk-11-jdk
- Install Conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html)
- Create an environment and install torchserve and torch-model-archiver.
  For CPU:
  conda create --name torchserve torchserve torch-model-archiver psutil future pytorch torchtext torchvision -c pytorch -c powerai
  For GPU:
  conda create --name torchserve torchserve torch-model-archiver psutil future pytorch torchtext torchvision cudatoolkit=10.1 -c pytorch -c powerai
- Activate the environment:
  source activate torchserve
- Optional: if using torchtext models, install sentencepiece:
  pip install sentencepiece
- Install Java 11:
  brew tap AdoptOpenJDK/openjdk
  brew cask install adoptopenjdk11
- Install Conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/macos.html)
- Create an environment and install torchserve and torch-model-archiver:
  conda create --name torchserve torchserve torch-model-archiver psutil future pytorch torchtext torchvision -c pytorch -c powerai
- Activate the environment:
  source activate torchserve
- Optional: if using torchtext models, install sentencepiece:
  pip install sentencepiece
Now you are ready to package and serve models with TorchServe.
If you plan to develop with TorchServe and change some of the source code, you must install it from source code.
Deactivate any conda environment you might be in, then run the following script from the top of the source directory.
NOTE: This script uninstalls any existing torchserve and torch-model-archiver installations.

For Ubuntu (verified on EC2 instances running Ubuntu DL AMI 28.x):
./scripts/install_from_src_ubuntu
For macOS:
./scripts/install_from_src_macos
For information about the model archiver, see detailed documentation.
This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already installed TorchServe and the model archiver.
To run this example, clone the TorchServe repository:
git clone https://github.com/pytorch/serve.git
Then run the following steps from the parent directory of the root of the repository. For example, if you cloned the repository into /home/my_path/serve, run the steps from /home/my_path.
To serve a model with TorchServe, first archive the model as a MAR file. You can use the model archiver to package a model. You can also create model stores to store your archived models.
- Create a directory to store your models:
  mkdir model_store
- Download a trained model:
  wget https://download.pytorch.org/models/densenet161-8d451a50.pth
- Archive the model by using the model archiver. The extra-files parameter uses a file from the TorchServe repo, so update the path if necessary:
  torch-model-archiver --model-name densenet161 --version 1.0 --model-file ./serve/examples/image_classifier/densenet_161/model.py --serialized-file densenet161-8d451a50.pth --export-path model_store --extra-files ./serve/examples/image_classifier/index_to_name.json --handler image_classifier
For more information about the model archiver, see Torch Model archiver for TorchServe.
After you archive and store the model, use the torchserve command to serve the model.
torchserve --start --ncs --model-store model_store --models densenet161.mar
After you execute the torchserve command above, TorchServe runs on your host, listening for inference requests.
Note: If you specify model(s) when you run TorchServe, it automatically scales backend workers to the number of available vCPUs (if you run on a CPU instance) or the number of available GPUs (if you run on a GPU instance). On powerful hosts with many compute resources (vCPUs or GPUs), this startup and autoscaling process can take considerable time. To minimize TorchServe startup time, you can avoid registering and scaling models at startup and move that to a later point by using the corresponding Management API, which allows finer-grained control of the resources allocated to any particular model.
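As the note mentions, registration can be deferred to the Management API, which by default listens on port 8081. A small sketch, using only the standard library, of building the request URL that would register a model after startup (the host, port, and .mar file name here are illustrative values, not a definitive client):

```python
from urllib.parse import urlencode

def register_url(host, mar, initial_workers=1, synchronous=True):
    # Build the Management API URL that registers a model archive and
    # spins up a chosen number of workers, instead of doing it at startup.
    query = urlencode({
        "url": mar,
        "initial_workers": initial_workers,
        "synchronous": str(synchronous).lower(),
    })
    return f"http://{host}:8081/models?{query}"

# POST this URL (e.g. with curl -X POST "<url>") to register the model.
print(register_url("127.0.0.1", "densenet161.mar", initial_workers=2))
```

Controlling initial_workers per model this way is what gives you the finer-grained resource allocation the note describes.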
To test the model server, send a request to the server's predictions API.
Complete the following steps:
- Open a new terminal window (other than the one running TorchServe).
- Use curl to download one of these cute pictures of a kitten, using the -O flag to save it as kitten.jpg.
- Use curl to send a POST request to the TorchServe predict endpoint with the kitten image.
The following code completes all three steps:
curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg
The predict endpoint returns a prediction response in JSON. It will look something like the following result:
[
{
"tiger_cat": 0.46933549642562866
},
{
"tabby": 0.4633878469467163
},
{
"Egyptian_cat": 0.06456148624420166
},
{
"lynx": 0.0012828214094042778
},
{
"plastic_bag": 0.00023323034110944718
}
]
You will see this result in the response to your curl call to the predict endpoint, and in the server logs in the terminal window running TorchServe. It is also logged locally along with metrics.
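On the client side, a response like the one above can be parsed with nothing but the standard library. A minimal sketch that picks the highest-scoring label (the scores below are abbreviated sample values, not real model output):

```python
import json

# Abbreviated sample of the JSON body returned by the predictions endpoint.
body = '''[
  {"tiger_cat": 0.4693},
  {"tabby": 0.4634},
  {"Egyptian_cat": 0.0646}
]'''

predictions = json.loads(body)
# Each element is a single-key {label: score} object; flatten and take the max.
label, score = max(
    ((k, v) for d in predictions for k, v in d.items()),
    key=lambda kv: kv[1],
)
print(label)  # -> tiger_cat
```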
Now you've seen how easy it can be to serve a deep learning model with TorchServe! Would you like to know more?
To stop the currently running TorchServe instance, run the following command:
torchserve --stop
You will see output specifying that TorchServe has stopped.
TorchServe exposes configuration settings that allow the user to control the number of worker threads on CPUs and GPUs. This is an important config property that can speed up the server depending on the workload.
Note: the following property has a bigger impact under heavy workloads.
If TorchServe is hosted on a machine with GPUs, there is a config property called number_of_gpu that tells the server to use a specific number of GPUs per model. If multiple models are registered with the server, this applies to all of them. Setting it to a low value (e.g., 0 or 1) results in under-utilization of the GPUs. Conversely, setting it to a high value (>= the maximum number of GPUs available on the system) spawns that many workers per model, causing unnecessary contention for GPUs and sub-optimal scheduling of threads to the GPU.
ValueToSet = (Number of Hardware GPUs) / (Number of Unique Models)
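A quick worked example of this sizing rule, with hypothetical numbers:

```python
# Hypothetical host: 8 hardware GPUs serving 2 unique models.
hardware_gpus = 8
unique_models = 2

# Apply the rule above: ValueToSet = hardware GPUs / unique models.
number_of_gpu = hardware_gpus // unique_models
print(number_of_gpu)  # -> 4
```

With this setting, each model's workers share the GPUs without over-subscribing them.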
Refer to torchserve docker for details.
- Full documentation on TorchServe
- Manage models API
- Inference API
- Package models for use with TorchServe
- TorchServe model zoo for pre-trained and pre-packaged models-archives
We welcome all contributions!
To learn more about how to contribute, see the contributor guide here.
To file a bug or request a feature, please file a GitHub issue. For filing pull requests, please use the template here. Cheers!
TorchServe acknowledges the Multi Model Server (MMS) project, from which it was derived.