jkulhanek/robot-visual-navigation

Dataset problem

Closed this issue · 9 comments

Hi! After installing all the necessary requirements to use the repository on my PC, I tried to run the code as described in the help files. However, I get an error that seems to be related to a dataset, and I can't find a way to fix it.

Thanks in advance.

[Screenshot from 2021-10-27 13-57-56]
[Screenshot from 2021-10-27 13-58-18]

It means you don't have the "turtle_room" dataset needed to run the experiment. Can you try running the dmhouse experiments? They use a simulated environment (DeepMind Lab). Alternatively, I can provide you with our "turtle_room" dataset if you want.
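(For reference, the simulated experiments are presumably launched through the same entry point, e.g. something like `python train.py dmhouse`; the exact trainer names are defined in the repository's train.py, so treat that command as an assumption.)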

If it is not too much trouble, I would like you to provide me with the "turtle_room" dataset, although I will try the dmhouse experiments as well.

Thanks and sorry for bothering you.

I am sorry it took me so long. The dataset can be downloaded here: https://storage.googleapis.com/robot-visual-navigation-datasets/turtle_room_grid_compiled.hdf5
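A quick way to confirm the download is intact before training is to open it with h5py (which HDF5 loading in Python typically uses); this sketch makes no assumptions about the file's internal key names, it only lists whatever is there:

```python
# Minimal integrity check for the downloaded dataset (assumes h5py is installed).
import h5py

with h5py.File("turtle_room_grid_compiled.hdf5", "r") as f:
    # Print every dataset's name, shape, and dtype, whatever they happen to be.
    def describe(name, node):
        if isinstance(node, h5py.Dataset):
            print(name, node.shape, node.dtype)

    f.visititems(describe)
```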

Hi, I ran into the same problem and solved it by using the .hdf5 file you provided. However, a new issue has come up.


================================================================
Using CPU only
Traceback (most recent call last):
File "train.py", line 49, in
trainer.run()
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/core.py", line 219, in run
return self.trainer.run(self.process)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/common/train_wrappers.py", line 47, in run
ret = super().run(*args, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/core.py", line 207, in run
return self.trainer.run(process, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/core.py", line 207, in run
return self.trainer.run(process, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/common/train_wrappers.py", line 126, in run
return super().run(_late_process, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/core.py", line 207, in run
return self.trainer.run(process, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/core.py", line 242, in run
tdiff, _, _ = process(mode='train', context=dict())
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/common/train_wrappers.py", line 120, in _late_process
data = process(*args, context = context, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/common/train_wrappers.py", line 24, in process
res = self.trainer.process(**kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/common/train_wrappers.py", line 65, in process
tdiff, episode_end, stats = self.trainer.process(**kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/common/train_wrappers.py", line 179, in process
tdiff, episode_end, stats = self.trainer.process(mode = mode, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/actor_critic/unreal/unreal.py", line 284, in process
self._sample_experience_batch()
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/actor_critic/unreal/unreal.py", line 320, in _sample_experience_batch
actions, values, action_log_prob, states = self._step(self.rollouts.observations, self.rollouts.masks, self.rollouts.states)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/utils.py", line 63, in call
results = function(*to_tensor(args, device), **to_tensor(kwargs, device))
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/actor_critic/unreal/unreal.py", line 221, in step
policy_logits, value, states = model(observations, masks, states)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/python/model.py", line 108, in forward
features, states = self._forward_base(inputs, masks, states)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/python/model.py", line 117, in _forward_base
image, goal = self.shared_base(image), self.shared_base(goal)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/media/agent/eb0d0016-e15f-4a25-8c28-0ad31789f3cb/ROS/robot-visual-navigation/deep-rl-pytorch/deep_rl/model/module.py", line 18, in forward
results = self.inner(*args)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "/home/agent/anaconda3/envs/deeprl/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 341, in conv2d_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Given groups=1, weight of size 16 16 8 8, expected input[16, 3, 84, 84] to have 16 channels, but got 3 channels instead
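To decode the error: a weight of size 16 16 8 8 means the first convolution was built with in_channels=16 (and out_channels=16, 8x8 kernels), while the actual observations are 3-channel 84x84 images. A minimal sketch reproducing the same failure, assuming only PyTorch:

```python
import torch
import torch.nn as nn

# First conv built as if the input had 16 channels; weight shape: (16, 16, 8, 8).
conv = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=8)

batch = torch.zeros(16, 3, 84, 84)  # 16 stacked envs, 3-channel 84x84 frames

# Raises: RuntimeError: Given groups=1, weight of size 16 16 8 8,
# expected input[16, 3, 84, 84] to have 16 channels, but got 3 channels instead
conv(batch)
```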

Please post the exact command you used.

The command was "python train.py turtlebot".
I guess the issue may be caused by the following line:

model = Model(self.env.observation_space.spaces[0].spaces[0].shape[0], self.env.single_action_space.n)

If " model = Model(self.env.observation_space.spaces[0].spaces[0].shape[1], self.env.single_action_space.n)" was used, the issue disappeared and the code works without any errors. But I'm not sure whether this is the correct solution.

Yes, it is the correct solution. I don't know how this could have happened, because the code definitely worked at the time of writing. It might be that the older gym version didn't include the batch dimension in the VectorEnv observation shape. The same change will likely be needed in other places, such as test-env.py.
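A sketch of the suspected version difference (the per-version gym behavior is an assumption here, not verified against gym's changelog):

```python
# Hypothetical illustration: newer gym versions prepend the number of
# environments to a VectorEnv's observation shape.
single_obs_shape = (3, 84, 84)  # (channels, height, width) for one env
num_envs = 16

old_vector_shape = single_obs_shape                # older gym (assumed)
new_vector_shape = (num_envs,) + single_obs_shape  # (16, 3, 84, 84)

assert old_vector_shape[0] == 3   # shape[0] used to be the channel count...
assert new_vector_shape[0] == 16  # ...but is now num_envs, hence the error
assert new_vector_shape[1] == 3   # shape[1] is the channel count, hence the fix
```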

Thanks for your kind confirmation.

Fixed