maggiez0138/Swin-Transformer-TensorRT
This project aims to explore the deployment of Swin-Transformer based on TensorRT, including the test results of FP16 and INT8.
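Several of the issues below concern FP16/INT8 engine builds. For context, a common way to produce such engines from an exported ONNX model is TensorRT's `trtexec` tool; the commands below are a sketch with placeholder file names, not this repo's exact build steps.

```shell
# Build an FP16 engine from an exported ONNX model (file names are placeholders).
trtexec --onnx=swin_transformer.onnx --fp16 --saveEngine=swin_fp16.engine

# INT8 builds additionally require calibration (e.g. a calibration cache).
trtexec --onnx=swin_transformer.onnx --int8 --saveEngine=swin_int8.engine
```

Note that `--fp16` and `--int8` only permit reduced-precision kernels; TensorRT may still fall back to FP32 for layers where lower precision is slower or unsupported, which is one common cause of "no improvement in FP16 mode" reports.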
Python · MIT License
Issues
- No improvement using FP16 mode (#7, opened by zjujh1995, 1 comment)
- How to deploy Swin-T on the Xavier? (#16, opened by GeneralJing, 0 comments)
- Does exporting the ONNX file require adding the --quantize param? (#14, opened by zhu2bowen, 0 comments)
- SwinIR FP16 mode outputs black pics (#13, opened by zhu2bowen, 0 comments)
- Cannot deal with input images where w != h? (#12, opened by zhu2bowen, 2 comments)
- Average FPS of each model (#3, opened by Linaom1214, 1 comment)
- Where are the diagrams below obtained? (#4, opened by tensorflowt, 5 comments)
- Does it support dynamic batch inference? (#2, opened by wangjingg, 1 comment)
- Support Swin Transformer object detection (#9, opened by manhtd98, 0 comments)
- Exporting the roll operator to ONNX (#5, opened by linuxmi, 2 comments)
- Question about CUDA version (#1, opened by PigBroA)