tensorrt-conversion
There are 31 repositories under the tensorrt-conversion topic.
jolibrain/deepdetect
Deep learning API and server in C++14 with support for PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and t-SNE
SthPhoenix/InsightFace-REST
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
THU-MIG/torch-model-compression
An automated toolset for analyzing and modifying the structure of PyTorch models, including a model-compression algorithm library that analyzes model structures automatically
BlueMirrors/Yolov5-TensorRT
YOLOv5 TensorRT implementations
ThomasVonWu/SparseEnd2End
An open-source end-to-end perception deployment solution based on the vision sparse-transformer paradigm.
emptysoal/TensorRT-YOLOv8
Based on TensorRT v8.0+, deploys YOLOv8 detection, pose, segmentation, and tracking with C++ and Python APIs.
col-in-coding/Tensorrt-CV
Using TensorRT for Inference Model Deployment.
YuzhouPeng/unet-tensorrt
A TensorRT version of UNet, inspired by tensorrtx
gpastal24/ViTPose-Pytorch
ViTPose without MMCV dependencies
k9ele7en/Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a PyTorch -> ONNX -> TensorRT converter and inference pipelines (TensorRT, and multi-format Triton server). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX
emptysoal/tensorrt-experiment
Based on TensorRT 8.2.4, compares inference speed across different TensorRT APIs.
leandro-svg/SparseInst_TensorRT
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
thaitc-hust/Yolo-TensorRT
Convert YOLO models to ONNX and TensorRT, adding batched NMS.
emptysoal/TensorRT-v8-YOLOv5-v5.0
Based on TensorRT v8.2, builds the YOLOv5-v5.0 network from scratch to speed up YOLOv5-v5.0 inference
k9ele7en/ONNX-TensorRT-Inference-CRAFT-pytorch
Advanced inference performance using TensorRT for CRAFT text detection. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.
bnabis93/tensorrt-toy
tensorrt-toy code
jinyeom/torchrt
Simple tool for PyTorch >> ONNX >> TensorRT conversion
CuteBoiz/TensorRT_Parser_Python
Export a TensorRT engine from ONNX and run inference with Python
MrLaki5/TensorRT-onnx-dockerized-inference
Dockerized TensorRT inference engine with an ONNX model-conversion tool and C++ pre- and post-processing for ResNet50 and Ultraface
Nannigalaxy/jetson_tools
Tools for NVIDIA Jetson Nano, TX2, and Xavier.
k9ele7en/torch2tensorRT-dynamic-CRAFT-pytorch
Conveniently converts the pretrained CRAFT text-detection PyTorch model directly into a TensorRT engine, without an intermediate ONNX step
NusratNB/TensorRT_TF2.4
TensorRT implementation with TensorFlow 2
ggluo/TensorRT-Cpp-Example
C/C++ TensorRT inference examples for models created with PyTorch/JAX/TensorFlow
SarthakGarg19/Accelerating-Inference-in-Tensorflow-using-TensorRT
TensorRT optimizes a deep learning model by making it lightweight and accelerating its inference, extracting every ounce of performance so it is well suited to deployment at the edge. This repository helps you convert any deep learning model from TensorFlow to TensorRT.
KernFerm/pytorch-to-tensorrt-model-converter
This project provides a comprehensive Python script that converts a PyTorch model to an ONNX model and then to a TensorRT engine for NVIDIA GPUs, then runs inference with the resulting engine. The script handles the entire conversion process end to end.
sasikiran/jetson_tx2_trt_ssd
Jetson TX2 compatible TensorFlow's ssd_mobilenet_v2_coco for TensorRT 6 / JetPack 4.3
akki2503/Image_Classification_Experiments_using_Cifar10
Experiments with the CIFAR-10 dataset to understand and implement various deep learning techniques and CNN architectures for image classification.
emrecanaltinsoy/keras2trt
A CLI tool to convert Keras models to ONNX models and TensorRT engines
littletomatodonkey/model_inference
Model conversion and inference code for different backends
shiyizhang93/trtprog
A notebook project documenting the process of learning TensorRT.
makaveli10/cpptensorrtz
Convert popular deep learning models to TensorRT, preferably using the C++ API