PyCUDA error during TensorRT inference
PankajJ08 opened this issue · 3 comments
I converted an ONNX model into a .trt file on a Jetson Nano using TensorRT 5, CUDA 10, Python 3.6...
But I'm getting an error related to PyCUDA.
The traceback is:
File "demo.py", line 30, in
test_image_tensorrt()
File "demo.py", line 13, in test_image_tensorrt
dets, lms = centerface(frame, h, w, threshold=0.35)
File "/ML/CenterFace/prj-tensorrt/centerface.py", line 21, in call
return self.inference_tensorrt(img, threshold)
File "/ML/CenterFace/prj-tensorrt/centerface.py", line 91, in inference_tensorrt
trt_outputs = do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream) # numpy data
File "/ML/CenterFace/prj-tensorrt/centerface.py", line 60, in do_inference
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
File "/ML/CenterFace/prj-tensorrt/centerface.py", line 60, in
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
pycuda._driver.LogicError: cuMemcpyHtoDAsync failed: invalid argument
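For context, the `do_inference` helper the traceback goes through looks like the standard TensorRT Python sample pattern. A minimal sketch, assuming that layout (class and function names here are illustrative, not copied from the repo):

```python
# Sketch of the usual TensorRT + PyCUDA buffer pattern (assumed layout, not the repo's exact code).
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import tensorrt as trt

class HostDeviceMem:
    def __init__(self, host_mem, device_mem):
        self.host = host_mem      # page-locked host buffer
        self.device = device_mem  # device allocation of the same byte size

def allocate_buffers(engine):
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        # Buffer size is fixed by the binding shape the engine was built with.
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        (inputs if engine.binding_is_input(binding) else outputs).append(
            HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream

def do_inference(context, bindings, inputs, outputs, stream):
    # cuMemcpyHtoDAsync raises "invalid argument" when inp.host holds data whose
    # size does not match the binding shape the engine was built for.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    context.execute_async(bindings=bindings, stream_handle=stream.handle)
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    stream.synchronize()
    return [out.host for out in outputs]
```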
Maybe you can try TensorRT 7.
I have found the reason for the error. It's due to the input size of the ONNX model; it must be 32 * 32. Any other input size causes the error.
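A minimal sketch of the workaround, assuming the .trt engine was exported with the fixed 32 x 32 input described above (the constants and helper name below are illustrative, not from the repo):

```python
# Resize every frame to the fixed input size the engine was built for, so the host
# buffer matches the device allocation and cuMemcpyHtoDAsync gets a valid copy size.
import cv2

ENGINE_H, ENGINE_W = 32, 32  # assumed: size baked into the .trt engine

def detect(centerface, frame, threshold=0.35):
    # centerface is the CenterFace wrapper constructed in demo.py
    resized = cv2.resize(frame, (ENGINE_W, ENGINE_H))  # cv2.resize takes (width, height)
    # Pass the engine's own height/width, not the original frame's.
    return centerface(resized, ENGINE_H, ENGINE_W, threshold=threshold)
```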
@PankajJ08 Can you give more detail?