nanmi/yolov7-trt

Cuda failure: 700 in "doInference"

I converted the officially provided yolov7.pt to ONNX and then to a TRT engine. Then I changed "const char* OUTPUT_BLOB_NAME = "output0";" to "output", since my ONNX model's output node is named "output". Nothing else was changed, and it gives me an error like this:
(screenshot of the error: Cuda failure: 700 in doInference)

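For reference, here is roughly how I understand doInference to work, based on the usual TensorRT sample pattern. The body below is my own sketch, not the repo's exact code, and the binding names and sizes are my assumptions. I'm showing it because both the output blob name and the output buffer size feed into the device-to-host copy, and CUDA error 700 is an illegal memory access:

```cpp
// Sketch of the inference call as I understand it (NOT the repo's exact code).
// The binding is looked up by name, so if the ONNX output node is "output" but
// OUTPUT_BLOB_NAME still says "output0", the lookup fails; conversely, if the
// name matches but output_size does not match the real tensor volume, the
// device-to-host copy can hit an illegal address (CUDA error 700).
#include <cuda_runtime_api.h>
#include <NvInfer.h>

static const char* INPUT_BLOB_NAME  = "images";   // assumption
static const char* OUTPUT_BLOB_NAME = "output";   // renamed from "output0"

void doInference(nvinfer1::IExecutionContext& context, float* input, float* output,
                 int batchSize, int input_volume, int output_size)
{
    const nvinfer1::ICudaEngine& engine = context.getEngine();
    const int inputIndex  = engine.getBindingIndex(INPUT_BLOB_NAME);
    const int outputIndex = engine.getBindingIndex(OUTPUT_BLOB_NAME);

    // Allocate device buffers sized from the constants in main().
    void* buffers[2];
    cudaMalloc(&buffers[inputIndex],  batchSize * input_volume * sizeof(float));
    cudaMalloc(&buffers[outputIndex], batchSize * output_size  * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(buffers[inputIndex], input,
                    batchSize * input_volume * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    context.enqueueV2(buffers, stream, nullptr);
    cudaMemcpyAsync(output, buffers[outputIndex],
                    batchSize * output_size * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(buffers[inputIndex]);
    cudaFree(buffers[outputIndex]);
}
```
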
Also, in the main function of your yolov7.cpp there is "int output_size = 6001 * 1 * 1;". What does this "6001" mean? I suppose the failure comes from some CUDA memory allocation mismatch?
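
My guess at what those 6001 floats might hold, assuming the common tensorrtx-style layout of a leading detection count followed by fixed-size detections of 6 floats each (1000 * 6 + 1 = 6001). The Detection struct, the field order, and the decodeOutput helper below are all my assumptions, not taken from this repo, so please correct me if the plugin writes something different:

```cpp
// Assumed layout of the 6001-float output buffer:
//   prob[0]            = number of valid detections
//   prob[1 + i*6 .. ]  = detection i as 6 floats (x, y, w, h, conf, class_id)
// Purely a guess based on other YOLO TensorRT examples.
#include <cstdio>

struct Detection {
    float bbox[4];    // x, y, w, h (assumed order)
    float conf;       // confidence score (assumed)
    float class_id;
};

void decodeOutput(const float* prob /* length 6001 */)
{
    const int floats_per_det = sizeof(Detection) / sizeof(float);  // 6
    const int num = static_cast<int>(prob[0]);                     // leading count
    for (int i = 0; i < num; ++i) {
        const float* d = prob + 1 + i * floats_per_det;
        std::printf("box %d: x=%.1f y=%.1f w=%.1f h=%.1f conf=%.2f cls=%d\n",
                    i, d[0], d[1], d[2], d[3], d[4], static_cast<int>(d[5]));
    }
}
```

If that guess is right, then the end-to-end ONNX "output" tensor I exported probably has a different shape than 6001 floats, which would explain the illegal memory access in the copy.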