tensorrt-inference-server
There are 4 repositories under the tensorrt-inference-server topic.
cap-ntu/ML-Model-CI
MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying MLaaS (Machine Learning-as-a-Service), bridging the gap between current ML training and serving systems.
DataXujing/TensorRT_CV
:rocket::rocket::rocket: NVIDIA TensorRT accelerated-inference tutorial!
chiehpower/Setup-deeplearning-tools
Set up CI and deep-learning tooling from scratch on an AGX or PC: CUDA, cuDNN, TensorRT, onnx2trt, onnxruntime, onnxsim, PyTorch, Triton Inference Server, Bazel, Tesseract, PaddleOCR, NVIDIA Docker, MinIO, and Supervisord.
rmccorm4/TRTIS-Go-Client
🖧 A simple gRPC client in Go for communicating with the TensorRT Inference Server.
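A client like this talks to the server's gRPC endpoint (port 8001 by default). As a rough illustration only, the minimal sketch below checks server liveness, assuming Go stubs have been generated from the server's grpc_service.proto; the import path and package name are hypothetical, and this uses the v2 (KFServing-style) API, whereas the TRTIS-era v1 API exposed a similar Status RPC instead.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Hypothetical import path: stubs generated from grpc_service.proto.
	pb "example.com/trtis-go-client/inferencepb"
)

func main() {
	// Connect to the inference server's gRPC endpoint (default port 8001).
	conn, err := grpc.Dial("localhost:8001",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	client := pb.NewGRPCInferenceServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// ServerLive is the liveness probe in the v2 gRPC API.
	resp, err := client.ServerLive(ctx, &pb.ServerLiveRequest{})
	if err != nil {
		log.Fatalf("ServerLive failed: %v", err)
	}
	fmt.Println("server live:", resp.Live)
}
```

Generating the stubs requires protoc with the protoc-gen-go and protoc-gen-go-grpc plugins, run against the .proto files shipped in the server repository.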