Issues
Has this project stopped being maintained?
#39 opened by Hongyuan-Liu - 12
Could you provide a simple tutorial on how to run onnx_to_plugin for a simple operator?
#35 opened by mhmmdjafarg - 1
KeyError: int8
#37 opened by ngockhanh5110 - 2
out of memory
#33 opened by frankxyy - 1
Is this repo still maintained?
#26 opened by cxiang26 - 0
unsupported ptx version error
#32 opened by frankxyy - 6
.so build succeeds, but TensorRT errors at runtime
#34 opened by frankxyy - 4
Error when running one_hot example
#29 opened by frankxyy - 8
Half model error
#28 opened by NingNanXin - 2
Support for dynamic shape?
#22 opened by sleepwalker2017 - 1
Will CUDA kernel code generated from Ansor's search space use shared-memory optimization during auto-tuning?
#25 opened by wugoukanle - 2
Precision of the one_hot plugin is wrong
#24 opened by wugoukanle - 1
No radical Subgraph optimization for TensorRT
#23 opened by wugoukanle - 1
Support CUDA 11.5 and TensorRT 8.2.1.3?
#20 opened by hxcai - 1
Can't run the example
#21 opened by scse-l - 1
RandomNormal not supported for frontend ONNX
#19 opened by sunkenQ - 4
test_tpat.py error
#5 opened by GeneralJing - 1
When will dynamic BatchSize be supported
#6 opened by zhaohb - 2
Does TPAT support grid_sample?
#14 opened by dingjingzhen - 11
Conversion Error for IsInf OP
#18 opened by debrekXuHan - 1
test_tpat error
#16 opened by hpz4311 - 2
When will the Scan operator be supported?
#15 opened by liukaiyueyuo - 1
What's the blazerml-tvm build error below?
#12 opened by liukaiyueyuo - 1
Are custom operators supported?
#11 opened by qingshanxiaozi - 0
Cannot find project_libbacktrace and an error is reported while building tvm from source
#10 opened by qingshanxiaozi - 2
Cuda Error in execute: 209 (no kernel image is available for execution on the device)
#7 opened by qraleq - 2
Is sparse convolution now supported?
#4 opened by GeneralJing - 9
Can't build TPAT
#1 opened by heluocs - 0
Docker image
#2 opened by LegendSun0