serving-pytorch-models
There are 12 repositories under the serving-pytorch-models topic.
ahkarami/Deep-Learning-in-Production
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
clearml/clearml-serving
ClearML - Model-Serving Orchestration and Repository Solution
balavenkatesh3322/model_deployment
A collection of model deployment libraries and techniques.
gasparian/PicsArtHack-binary-segmentation
Segmenting people in photos on iOS devices [PyTorch; U-Net]
SapienzaNLP/usea
Universal Semantic Annotator (LREC 2022)
bcaitech1/p3-dst-chatting-day
Chatting-Day's Dialogue State Tracking (DST)
lukedeo/torch-serving
Simple HTTP serving for PyTorch 🚀
fabridamicelli/torchserve-docker
TorchServe Docker images with specific Python versions, working out of the box.
ingyuseong/rabbitmq-inference
A message-queue-based server architecture to asynchronously handle resource-intensive tasks such as ML inference; a minimal worker sketch follows the list.
Lake-Wang/MLops_System_NBA_Attendance
End-to-end NBA analytics pipeline for predicting game outcomes and attendance using PyTorch, MLflow, and ONNX. Includes data scraping, model training, quantization, and scalable deployment with FastAPI and Triton Inference Server (a minimal FastAPI serving sketch follows the list).
nikitajz/pytorch-flask-inference
Serving a PyTorch model using Flask and Docker (a minimal Flask sketch follows the list).
IonBoleac/serve-torch-deployments
A proof of concept showing how to install and use TorchServe in various modes.
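
Following up on ingyuseong/rabbitmq-inference above, here is a minimal sketch of the queue-based pattern, assuming RabbitMQ with the pika client: a worker consumes inference requests from a queue, runs the model, and acknowledges each message only after the work is done. The queue name, payload format, and placeholder model are illustrative assumptions, not code from that repository.

```python
# Minimal RabbitMQ worker sketch (illustrative; not the code from
# ingyuseong/rabbitmq-inference). The "inference" queue, JSON payload,
# and placeholder model are assumptions for the example.
import json

import pika
import torch

# Placeholder model standing in for any resource-intensive PyTorch model.
model = torch.nn.Linear(4, 2)
model.eval()


def handle_request(channel, method, properties, body):
    payload = json.loads(body)  # e.g. {"features": [0.1, 0.2, 0.3, 0.4]}
    features = torch.tensor(payload["features"], dtype=torch.float32)
    with torch.no_grad():
        prediction = model(features).tolist()
    print("prediction:", prediction)
    # Ack only after inference finishes, so unprocessed messages are
    # re-queued if the worker crashes mid-task.
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="inference", durable=True)
channel.basic_qos(prefetch_count=1)  # one in-flight message per worker
channel.basic_consume(queue="inference", on_message_callback=handle_request)
channel.start_consuming()
```

A producer only needs channel.basic_publish(exchange="", routing_key="inference", body=json.dumps({"features": [...]})) to enqueue work, so the front end can respond immediately while workers drain the queue.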
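
For the FastAPI and ONNX portion of Lake-Wang/MLops_System_NBA_Attendance, here is a minimal sketch of serving an exported ONNX model behind a FastAPI endpoint with ONNX Runtime. The model path, input layout, and /predict route are hypothetical placeholders, not details from that project.

```python
# Minimal FastAPI + ONNX Runtime sketch (illustrative; not the code from
# Lake-Wang/MLops_System_NBA_Attendance). "attendance_model.onnx" and the
# flat feature-vector input are hypothetical placeholders.
import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the exported ONNX model once at startup.
session = ort.InferenceSession("attendance_model.onnx")
input_name = session.get_inputs()[0].name


class GameFeatures(BaseModel):
    features: list[float]  # one flat feature vector per request


@app.post("/predict")
def predict(game: GameFeatures):
    batch = np.asarray([game.features], dtype=np.float32)
    outputs = session.run(None, {input_name: batch})  # list of model outputs
    return {"prediction": outputs[0].tolist()}
```

Run it with uvicorn (uvicorn app:app, if the file is saved as app.py). In a pipeline like the one described, Triton Inference Server would typically host the model behind its own HTTP/gRPC API rather than an in-process session, but the request/response shape stays similar.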
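
And for nikitajz/pytorch-flask-inference, here is a minimal sketch of the Flask-plus-PyTorch serving pattern: load the model once at startup and expose a prediction endpoint. The ResNet-18 classifier and the /predict route are illustrative assumptions, not that repository's code.

```python
# Minimal Flask inference sketch (illustrative; not the code from
# nikitajz/pytorch-flask-inference). Assumes a TorchVision ResNet-18
# classifier and a /predict endpoint that accepts an image upload.
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)

# Load the model once at startup and keep it in eval mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


@app.route("/predict", methods=["POST"])
def predict():
    # Expect the image bytes in a multipart form field named "file".
    image = Image.open(io.BytesIO(request.files["file"].read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(batch)
    return jsonify({"class_id": int(logits.argmax(dim=1))})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A Dockerfile for this would only need to install torch, torchvision, Flask, and Pillow and run the script; curl -F "file=@image.jpg" http://localhost:5000/predict exercises the endpoint.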