test.py gets stuck when computing output
Michaelszeng opened this issue · 1 comment
Michaelszeng commented
Hi,
I've followed the instructions in the README thoroughly and have double-checked all the steps. However, when running test.py, this line of code:
global_outputs, refine_output = model(input_var)
never seems to finish. In case it was simply taking a very long time, I also made a small test subset of the val2017 folder and the annotations file with just 5 images, yet the line still runs forever. On a keyboard interrupt, the traceback points to threads waiting to acquire a lock (not sure if that's useful to know). Any idea why? Thanks.
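For anyone debugging the same hang, a minimal sanity check is to run a single forward pass under torch.no_grad() and force a CUDA synchronization, which tells you whether the GPU side is the part that never returns. This is only a sketch: the model and input here are hypothetical stand-ins, not the ones test.py builds.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-ins for the model and input_var constructed by test.py
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device).eval()
input_var = torch.randn(1, 3, 256, 256, device=device)

with torch.no_grad():
    out = model(input_var)        # the kind of call that appears to hang
    if device == "cuda":
        torch.cuda.synchronize()  # block until the GPU kernels actually finish

print("forward pass finished:", tuple(out.shape))
```

If even this tiny forward pass hangs, the problem is in the PyTorch/CUDA setup rather than in the model or the dataset.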
Michaelszeng commented
I fixed the issue: I reverted to the oldest version of PyTorch that is compatible with CUDA 10.2, and it worked.
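For anyone else hitting this, a quick way to confirm that the installed PyTorch build matches the local CUDA toolkit (a mismatch can make forward passes hang like this) is to print the versions PyTorch reports. This is a generic check, not specific to this repository:

```python
import torch

print("PyTorch version:   ", torch.__version__)
print("Built against CUDA:", torch.version.cuda)        # CUDA version the wheel was compiled for
print("CUDA available:    ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:               ", torch.cuda.get_device_name(0))
```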