gpu_nums > 1
lianqing11 opened this issue · 0 comments
lianqing11 commented
If you want to run on multiple GPUs, the forward pass of self.shared should access the weights through the ModuleList (e.g. self._w_h, which is a ModuleList).
Otherwise it raises RuntimeError: tensors are on different GPUs, because the parameters used in self.forward(xx) are stored in a plain list data structure, which does not get replicated to the other GPUs.
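A minimal sketch of the failure mode (the nn.Linear layers and sizes are placeholders, not the actual shared-model code; self.w_h / self._w_h only mirror the naming above):

```python
import torch
import torch.nn as nn

class Shared(nn.Module):
    """Sketch: the same layers are reachable both through a plain Python
    list and through a ModuleList. nn.DataParallel replicates only
    registered submodules, so after replication the plain list still
    points at the GPU-0 copies."""

    def __init__(self, num_layers=4, hidden=32):
        super().__init__()
        layers = [nn.Linear(hidden, hidden) for _ in range(num_layers)]
        self.w_h = layers                  # plain list: NOT replicated
        self._w_h = nn.ModuleList(layers)  # ModuleList: replicated per GPU

    def forward(self, x):
        # Indexing self.w_h here fails on the non-default GPUs
        # ("tensors are on different GPUs"); index self._w_h instead.
        for layer in self._w_h:
            x = torch.relu(layer(x))
        return x

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(Shared().cuda())
    out = model(torch.randn(8, 32, device="cuda"))
```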