facebookresearch/ELF

It fails on a newer version of torch (0.3.0) because of an API change

qzhong0605 opened this issue · 1 comment

Hi,
I deployed ELF on my server, but I found that when I run it with torch 0.3.0, it reports the following errors:

Traceback (most recent call last):
  File "train.py", line 23, in <module>
    model = env["model_loaders"][0].load_model(GC.params)
  File "/root/ELF/rlpytorch/model_loader.py", line 95, in load_model
    model.cuda(device_id=args.gpu)
TypeError: cuda() got an unexpected keyword argument 'device_id'

Traceback (most recent call last):
  File "train.py", line 25, in <module>
    env["mi"].add_model("actor", model, copy=True, cuda=all_args.gpu is not None, gpu_id=all_args.gpu)
  File "/root/ELF/rlpytorch/model_interface.py", line 98, in add_model
    self.models[key].cuda(device_id=gpu_id)
TypeError: cuda() got an unexpected keyword argument 'device_id'
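
If you hit the same tracebacks, it is worth confirming the installed torch version first, since the cuda() keyword changed between releases:

import torch
print(torch.__version__)  # 0.3.0 in my case; older releases still accept device_id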

I found that the module file (torch/nn/modules/module.py) in torch 0.3.0 now defines cuda() as follows:

def cuda(self, device=None):
    """Moves all model parameters and buffers to the GPU.

    This also makes associated parameters and buffers different objects. So
    it should be called before constructing optimizer if the module will
    live on GPU while being optimized.

    Arguments:
        device (int, optional): if specified, all parameters will be
            copied to that device

    Returns:
        Module: self
    """
    return self._apply(lambda t: t.cuda(device))
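
So on torch 0.3.0 the old keyword is rejected while the new one is accepted. For example (a minimal sketch with a toy module, nothing ELF-specific):

import torch.nn as nn

model = nn.Linear(4, 2)
model.cuda(device=0)     # works on torch 0.3.0
model.cuda(device_id=0)  # raises TypeError: unexpected keyword argument 'device_id'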

I think the corresponding calls in rlpytorch/model_interface.py and rlpytorch/model_loader.py must be changed together to match the new signature, as sketched below.
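
Concretely, the two call sites from the tracebacks above would become (a sketch of the rename, not a tested patch):

# rlpytorch/model_loader.py, in load_model (line 95 in the traceback)
model.cuda(device=args.gpu)

# rlpytorch/model_interface.py, in add_model (line 98 in the traceback)
self.models[key].cuda(device=gpu_id)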

You can change device_id to device, and it seems to work just fine.
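
If you also need to keep older torch releases working, note that only the keyword name changed, so passing the device positionally (module.cuda(gpu_id)) works on both. Alternatively, a small shim can try both spellings (cuda_compat is a hypothetical helper, not part of ELF):

def cuda_compat(module, gpu_id):
    # Move a module to a GPU regardless of which keyword
    # this torch release expects.
    try:
        return module.cuda(device=gpu_id)      # torch >= 0.3.0
    except TypeError:
        return module.cuda(device_id=gpu_id)   # older torch releases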