facebookresearch/TorchRay

Saliency map for multiple GPUs

huygens12 opened this issue · 1 comment

Hi, thanks for this amazing repo!

I've encountered an issue while using multiple GPUs to generate saliency maps with torchray.attribution. My model is wrapped in torch.nn.DataParallel and I'm trying to use 4 GPUs. However, when batch_size is set to 4*m, the first dimension of the returned saliency map is always m. I've looked at the code, and the issue seems to come from the Probe class in common.py: it appears to capture the gradient on only one device. Do you have any ideas on how to solve this?
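
For reference, a minimal sketch of the setup described above, assuming a VGG16-style backbone and TorchRay's grad_cam entry point; the saliency_layer path and the target choice are assumptions, not part of the original report:

```python
# Sketch of the reported setup (assumptions noted below): wrap the model in
# torch.nn.DataParallel over 4 GPUs and pass a batch of size 4*m to grad_cam.
import torch
import torchvision
from torchray.attribution.grad_cam import grad_cam

m = 2
model = torchvision.models.vgg16(pretrained=True).cuda()
model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3])

x = torch.randn(4 * m, 3, 224, 224).cuda()  # batch of size 4*m
target = 0                                  # assumed target class for illustration

# 'module.features.29' is an assumed layer name (DataParallel prefixes submodules
# with 'module.'); adjust to the actual model.
saliency = grad_cam(model, x, target, saliency_layer='module.features.29')
print(saliency.shape)  # reported symptom: first dimension is m, not 4*m
```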

Hi @huygens12,

Sorry for the delayed reply. Multiple GPUs are currently not supported, so I'd recommend moving the model to a single GPU. We may look into supporting multiple GPUs in the future.
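
A minimal sketch of that workaround, continuing the example above and assuming the model is a torch.nn.DataParallel wrapper: unwrap it via .module and run the attribution on a single GPU.

```python
# Workaround sketch (assumptions: `model`, `x`, `target` as in the example above):
# recover the underlying module from DataParallel and keep everything on one device.
import torch
from torchray.attribution.grad_cam import grad_cam

device = torch.device('cuda:0')

if isinstance(model, torch.nn.DataParallel):
    model = model.module  # drop the wrapper so Probe hooks the real module
model = model.to(device)

x = x.to(device)  # full batch on a single GPU

# Layer name no longer carries the 'module.' prefix once the wrapper is removed.
saliency = grad_cam(model, x, target, saliency_layer='features.29')
print(saliency.shape)  # first dimension now matches the batch size
```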