tensorrt-inference-server

There are 4 repositories under the tensorrt-inference-server topic.

  • cap-ntu/ML-Model-CI

    MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying models as a service (MLaaS), bridging the gap between current ML training and serving systems.

    Language: Python
  • DataXujing/TensorRT_CV

    :rocket::rocket::rocket: NVIDIA TensorRT accelerated inference tutorial! (A minimal engine-loading sketch appears after this list.)

    Language: CSS
  • chiehpower/Setup-deeplearning-tools

    Set up CI for DL tools (CUDA, cuDNN, TensorRT, onnx2trt, onnxruntime, onnxsim, PyTorch, Triton-Inference-Server, Bazel, Tesseract, PaddleOCR, NVIDIA-Docker, MinIO, Supervisord) on AGX or PC from scratch. (A quick onnxruntime smoke test appears after this list.)

    Language: Python
  • rmccorm4/TRTIS-Go-Client

    🖧 Simple gRPC client in Go to communicate with TensorRT Inference Server. (A Python equivalent of the gRPC exchange follows this list.)

    Language: Go
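
As background for the TensorRT tutorial above, here is a minimal sketch of loading a serialized engine with the TensorRT Python API. It assumes a pre-built engine file (the `model.engine` path is hypothetical), and the binding-introspection calls shown are the pre-8.5 API, which differs slightly in newer TensorRT releases:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a pre-built engine; "model.engine" is a hypothetical path
# produced earlier by trtexec or the TensorRT builder.
with open("model.engine", "rb") as f:
    runtime = trt.Runtime(TRT_LOGGER)
    engine = runtime.deserialize_cuda_engine(f.read())

# An execution context holds per-inference state for this engine.
context = engine.create_execution_context()

# List the engine's I/O bindings (pre-8.5 binding API shown here).
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))
```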
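After an installation like the one the setup repo automates, a short onnxruntime smoke test can confirm the GPU stack is wired up. This is a sketch under the assumption that some exported `model.onnx` with a single float32 input is available (both hypothetical):

```python
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder; any exported ONNX model works for a smoke test.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print("active providers:", sess.get_providers())

# Feed random data shaped like the first input (dynamic dims replaced by 1).
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
print("output shapes:", [o.shape for o in outputs])
```

If `CUDAExecutionProvider` is missing from the active providers, the CUDA/cuDNN pieces of the install are the usual suspects.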
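The Go client above speaks the server's gRPC protocol; the same exchange in Python, using the official tritonclient package against Triton (the renamed TensorRT Inference Server), looks roughly like this. The model name, tensor names, and shape are hypothetical and must match the deployed model's configuration:

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Triton's gRPC endpoint listens on port 8001 by default.
client = grpcclient.InferenceServerClient(url="localhost:8001")
assert client.is_server_live()

# Hypothetical model with one FP32 input and one output; the names must
# match the deployed model's config.pbtxt.
inp = grpcclient.InferInput("input__0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float32))
out = grpcclient.InferRequestedOutput("output__0")

result = client.infer(model_name="resnet50", inputs=[inp], outputs=[out])
print(result.as_numpy("output__0").shape)
```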