Issues
- Converting on Jetson Nano (#321, opened by ArgoHA)
- Error in TensorRT conversion of e5-large model (#339, opened by ndeep27)
- About the TF-TRT C++ Image Recognition Demo (#337, opened by yucathy)
- Local rendezvous is aborting with status: NOT_FOUND: TRTEngineCacheResource not yet created, while converting a SavedModel to a TRT engine (#336, opened by devvaibhav455)
- InvalidArgumentError: Graph execution error: Input to reshape is a tensor with 1204224 values, but the requested shape has 4816896 (#335, opened by innat)
- ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate. (#279, opened by zmylk)
- TensorFlow / TensorRT mismatch (#334, opened by maciejskorski)
- "Incompatible shapes" error during inference (#333, opened by bugzyz)
- Serving a TF-TRT converted model returns error: NodeDef mentions attr 'max_batch_size' not in Op: name=TRTEngineOp (#332, opened by biaochen)
- CUDA synchronize alternative for profiling (#304, opened by aimilefth)
- Unable to save gradient functions when exporting a _DefinedFunction with converter.save('model') (#327, opened by tanpengshi)
- "ValueError: Failed to import metagraph, check error log for more info." error (#325, opened by liuxingbin)
- Failed to convert Hugging Face model to TensorRT (#324, opened by dathudeptrai)
- Building the example-cpp/mnist_demo TF-TRT example fails (#323, opened by IJunSang)
- Support for TensorRT 8.0 (#263, opened by bdnkth)
- Is it possible to use TensorRT to speed up an original TensorFlow T5 exported saved_model? (#306, opened by chenl-chief)
- How to convert a Transformer model with TensorRT ops (#316, opened by lyzKF)
- Variables saved in converted model (#305, opened by tfeher)
- Loading the file to build the model failed (#303, opened by AarenWu)
- Where to get the saved models (#296, opened by mingxiaoh)
- Very low validation accuracy for ResNet50 and ResNet101 models using TF-TRT (#258, opened by Selventhiranraj)
- Inference time using TF-TRT is the same as native TensorFlow for object detection models (SSD ResNet 640x640 and EfficientDet D0) (#287, opened by Abdellah-Laassairi)
- How to evaluate overall model accuracy of TF-TRT FP32, FP16 and FP08 based image classifiers? (#285, opened by smbash2022)
- BERT Large (TF2) model conversion fails (#255, opened by tfeher)
- Accuracy drops a lot after building the TensorRT engine (#268, opened by zeqinLi)
- The final TRT model is too large (#265, opened by hanikh)
- INT8 engine conversion failed (#262, opened by wangyuehy)
- *Core dumped* bug (#261, opened by tdrvlad)
- Container TF-TRT does not exist (#252, opened by shubhamgajbhiye1994)
- No improvement using TensorRT 5 (#256, opened by IwakuraRein)
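
Many of the issues above revolve around the same workflow: converting a TensorFlow SavedModel with TF-TRT and then serving or benchmarking the result. For orientation, here is a minimal sketch of that workflow using the TF 2.x `TrtGraphConverterV2` API, assuming a TensorFlow build with TensorRT support enabled; the model directory, output directory, precision mode, and input shape below are placeholders, not values taken from any specific issue.

```python
import numpy as np
import tensorflow as tf  # requires a TensorFlow build with TensorRT support
from tensorflow.python.compiler.tensorrt import trt_convert as trt

SAVED_MODEL_DIR = "my_saved_model"      # placeholder: original SavedModel
OUTPUT_DIR = "my_saved_model_trt_fp16"  # placeholder: converted SavedModel

# Replace TensorRT-compatible subgraphs with TRTEngineOp nodes.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    precision_mode=trt.TrtPrecisionMode.FP16,
)
converter.convert()

# Optionally pre-build engines for a representative input shape so they are
# not built lazily on the first inference call.
def input_fn():
    yield (np.zeros((1, 224, 224, 3), dtype=np.float32),)

converter.build(input_fn=input_fn)
converter.save(OUTPUT_DIR)

# The converted model can then be reloaded like any other SavedModel:
loaded = tf.saved_model.load(OUTPUT_DIR)
```

INT8 conversion additionally requires a calibration input function, and several of the reports above (missing TensorRT support in the TensorFlow build, TensorFlow/TensorRT version mismatches, engines not yet created at serving time) arise before or during these steps.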