google/mannequinchallenge

CPU Support

isConic opened this issue · 3 comments

I'd like to run the inference pass on macOS (Mojave), on a system with an AMD/Radeon GPU rather than NVIDIA/CUDA.

Is there a quick fix to run the inference pass without GPU acceleration, using CPU resources instead?

Being rather new to PyTorch, I'm staring at the stack trace and looking at my first error, in models/pix2pix.py, in the line that declares a CUDA model:

new_model = torch.nn.parallel.DataParallel(
    new_model.cuda(), device_ids=range(torch.cuda.device_count()))

I can try to make the appropriate changes myself, but I might need help figuring out what to replace this with.

fcole commented

Unfortunately I don't have a lot of guidance to give here, as I haven't experimented with a CPU version. It might be as simple as removing the .cuda() calls, but I suspect PyTorch has some hidden way to bite you there.
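For what it's worth, the usual device-agnostic pattern would look something like this (an untested sketch; the Linear module is just a stand-in for the network that models/pix2pix.py actually builds):

import torch

new_model = torch.nn.Linear(8, 8)  # stand-in for the repo's actual network

# Choose CUDA when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
new_model = new_model.to(device)  # replaces the hard-coded new_model.cuda()

if torch.cuda.is_available():
    # Only wrap in DataParallel when there are GPUs to spread work across.
    new_model = torch.nn.parallel.DataParallel(
        new_model, device_ids=range(torch.cuda.device_count()))

Note that any input tensors would then need a matching .to(device) as well, which may be exactly where PyTorch bites you.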

isConic commented

I ran the DAVIS test script. Every time I ran into a "CUDA is not enabled" error, I noticed that the offending code had .cuda() appended to it, as you mentioned.

I removed each .cuda() call as I ran into it in the stack trace, one by one. The script finally started working after I reached the last one. I only had to edit models/pix2pix.py and models/networks.py.
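For anyone else trying this, the edit I'm describing presumably comes down to something like the following (a sketch from memory, not the exact diff; the Linear module is a stand-in for the repo's network). In recent PyTorch versions, DataParallel with zero visible CUDA devices keeps an empty device list and simply forwards calls to the wrapped module, so the wrapper itself is harmless on CPU:

import torch

new_model = torch.nn.Linear(8, 8)  # stand-in for the repo's actual network

# Old line (crashes without CUDA because of the hard-coded .cuda() call):
# new_model = torch.nn.parallel.DataParallel(
#     new_model.cuda(), device_ids=range(torch.cuda.device_count()))

# Edited line: on a CUDA-less machine, device_count() is 0, so
# DataParallel acts as a plain pass-through and the model runs on CPU.
new_model = torch.nn.parallel.DataParallel(
    new_model, device_ids=range(torch.cuda.device_count()))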

Behold:
[screenshot: the test script running successfully on CPU]

fcole commented

Great, glad it works!