k9ele7en/Triton-TensorRT-Inference-CRAFT-pytorch
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
Python · BSD-3-Clause license
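
The conversion path described above starts with a PyTorch -> ONNX export. Below is a minimal sketch of that step, assuming the upstream CRAFT-pytorch layout (a `CRAFT` module in `craft.py` and a pretrained checkpoint such as `craft_mlt_25k.pth`); the file names, input size, and export settings are illustrative and may differ from the converter scripts actually shipped in this repo.

```python
# Minimal sketch: export a CRAFT PyTorch checkpoint to ONNX.
# Assumptions (not confirmed against this repo's scripts): the model class is
# `CRAFT` from craft.py, and the checkpoint was saved from a DataParallel model.
import torch
from collections import OrderedDict

from craft import CRAFT  # assumed import from the CRAFT-pytorch codebase


def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix left by DataParallel checkpoints."""
    return OrderedDict((k.replace("module.", "", 1), v) for k, v in state_dict.items())


def export_to_onnx(weights_path="craft_mlt_25k.pth", onnx_path="craft.onnx"):
    model = CRAFT()
    state = torch.load(weights_path, map_location="cpu")
    model.load_state_dict(strip_module_prefix(state))
    model.eval()

    # CRAFT takes an RGB image tensor; height and width are marked dynamic so
    # the ONNX graph (and a TensorRT engine built from it) can accept
    # variable-sized inputs.
    dummy = torch.randn(1, 3, 768, 768)
    torch.onnx.export(
        model,
        dummy,
        onnx_path,
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"},
                      "output": {0: "batch"}},
        opset_version=11,
    )


if __name__ == "__main__":
    export_to_onnx()
```

From the ONNX file, a TensorRT engine can then be built (for example with `trtexec --onnx=craft.onnx --saveEngine=craft.engine`), and the resulting engine, or the ONNX/TorchScript model directly, placed in a Triton model repository for serving.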