yasenh/libtorch-yolov5

Does anyone run GPU inference successfully?

Jelly123456 opened this issue · 5 comments

I could not run inference with the GPU enabled. I followed the instructions and modified export.py to export the TorchScript model with GPU support, but when inferring with libtorch it cannot load the weights.
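For context, the usual change is to move both the model and the dummy input onto CUDA before tracing, so the traced graph is recorded on the GPU. A minimal, self-contained sketch (a tiny stand-in model is used here instead of the actual YOLOv5 `model` and `img` that export.py builds):

```python
import torch
import torch.nn as nn

# Stand-in for the YOLOv5 model and dummy input that export.py creates;
# the device-placement logic is the part that matters for a GPU export.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
img = torch.zeros(1, 3, 64, 64)

# Fall back to CPU so the sketch also runs on machines without CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()
img = img.to(device)

with torch.no_grad():
    traced = torch.jit.trace(model, img)
traced.save("yolov5s.torchscript.pt")  # placeholder file name
```

A model traced on CUDA will expect CUDA tensors at load time, which is why a GPU-exported module can fail to load in a CPU-only libtorch build.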

Does anyone know how to solve it?

My OS is Windows 10, and I am able to run the CPU TorchScript model.

Thanks in advance.

Hi @Jelly123456, what is the error message? I tested a few days ago (on Ubuntu) and it works fine. What is your PyTorch version, and did you pull the latest yolov5 Python version?

@yasenh Thanks very much for your reply.
There is no error message. This is the output when running with GPU:

[screenshot of the console output attached]

The PyTorch version is v1.6 and I am using the latest yolov5 version.

@Jelly123456 I tested locally without any issue. It seems to crash during `module_.forward(inputs)`; not sure if it is related to Windows. Maybe you can follow the PyTorch official example and make sure it works on your GPU.
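As a quick sanity check along those lines, it can help to confirm on the Python side that CUDA is visible and that a TorchScript module loads and runs on the chosen device before debugging the C++ port. A hedged sketch (a tiny traced module stands in for the exported YOLOv5 model; `pick_device` is a hypothetical helper mirroring the C++ `torch::cuda::is_available()` check):

```python
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    # Mirrors the torch::cuda::is_available() check done on the C++ side.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()

# Any TorchScript module behaves the same with respect to device placement,
# so a small traced conv stands in for the exported YOLOv5 model here.
module = torch.jit.trace(nn.Conv2d(3, 4, 1), torch.zeros(1, 3, 8, 8))
module = module.to(device)

inputs = torch.zeros(1, 3, 8, 8, device=device)
out = module(inputs)  # the step that crashed in the libtorch port
```

If this works in Python but `module_.forward(inputs)` still crashes in C++, the problem is more likely in the libtorch build or the Windows toolchain than in the exported model.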

Feel free to share your experience

@yasenh Thanks very much for your testing.

I did some internet searching and found that this could be a libtorch bug on Windows:
pytorch/pytorch#23217
https://github.com/yf225/pytorch-cpp-issue-tracker/issues/378