spacewalk01/tensorrt-yolov9

模型加载阶段报错

William9Baker opened this issue · 4 comments

The code fails when initializing the model with `Yolov9 model(engine_file_path)`. The failure is traced to `context = engine->createExecutionContext()`. What could be causing this? Thanks.

Will you post the error message here?

Make sure the engine file exists at your engine path.

> Will you put here the error message?

The error is traced to this line: `engine = runtime->deserializeCudaEngine(engineData.get(), modelSize)`. The returned `engine` is null, even though the model path is loaded correctly.

Answer by @spolisetty:

The following few reasons could cause the deserializeCudaEngine() function to fail:

  • The GPU does not have enough memory to load the TRT engine.
  • The TRT engine file is corrupted.
  • The engine buffer size or pointer passed in is incorrect.
  • The installed TensorRT version differs from the version used to build the engine.
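
The file-related causes above can be ruled out before ever calling `deserializeCudaEngine()`, by checking that the engine file opens and by taking the buffer size directly from the file on disk so the `modelSize` argument cannot disagree with it. A minimal sketch (`readEngineFile` is a hypothetical helper, not part of this repo):

```cpp
#include <fstream>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical helper: read the whole serialized engine into a buffer and
// report its exact size, so the size passed to deserializeCudaEngine()
// always matches the file on disk.
std::unique_ptr<char[]> readEngineFile(const std::string& path, size_t& modelSize) {
    // Open at the end (std::ios::ate) so tellg() gives the file size.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) {
        throw std::runtime_error("engine file not found: " + path);
    }
    modelSize = static_cast<size_t>(file.tellg());
    file.seekg(0, std::ios::beg);
    auto engineData = std::make_unique<char[]>(modelSize);
    if (!file.read(engineData.get(), static_cast<std::streamsize>(modelSize))) {
        throw std::runtime_error("failed to read engine file: " + path);
    }
    // Caller passes engineData.get() and modelSize to deserializeCudaEngine().
    return engineData;
}
```

If this helper throws, the problem is the path or the file itself, not TensorRT.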

You need to set the TRT logging level to VERBOSE to capture the error messages. The logs will then include the failure details and information about memory usage.

You can also use the TRT profiler to see how much host or GPU memory the model uses.
Please refer to the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation for more info.

Link: https://forums.developer.nvidia.com/t/what-causes-the-deserializecudaengine-fail-and-how-to-get-the-error-message/253120