About CPU inference
pramishp opened this issue · 2 comments
Thank you for open sourcing this amazing piece of work.
I tried running the code on Windows 10 without a GPU, with `use_cuda: false` set and some CUDA checks removed, but it threw the following error:
```
return func(*args, **kwargs)
File "demo.py", line 281, in main
full_imgs_list, body_imgs, body_targets = batch
TypeError: cannot unpack non-iterable MemoryPinning object
```
I also tried running it on my Linux machine with a GPU, once with `use_cuda: true` and once with `use_cuda: false`. The inference time was almost the same in both cases, so I guess the `use_cuda` flag is not working.
Can we do the inference on the CPU?
Hi,
Maybe it's late, but I encountered this issue today. You should set all `pin_memory` parameters to `False` in `build.py` and `demo.py`. When the DataLoader instance is created, this parameter is set to `True`, which speeds up training and inference when CUDA is available. When running on CPU only, this parameter should be `False`.
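The fix above can be sketched as follows. This is a minimal, hypothetical example (the dataset and loader arguments are stand-ins, not the repo's actual ones); the key point is tying `pin_memory` to CUDA availability instead of hard-coding `True`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; the real project builds its own dataset in build.py.
dataset = TensorDataset(torch.randn(8, 3), torch.randn(8, 1))

# Mirrors the use_cuda config flag: pin host memory only when a GPU
# is available, since pinning only helps host-to-device copies.
use_cuda = torch.cuda.is_available()

loader = DataLoader(dataset, batch_size=4, pin_memory=use_cuda)

# On CPU-only machines pin_memory is now False, so batches unpack normally.
for features, targets in loader:
    print(features.shape, targets.shape)
```

With this guard, the same code path works on both CPU-only and CUDA machines without manually editing the flag.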