Add ``rgb`` observation to env settings when training?
Yonggie opened this issue · 2 comments
🚀 Feature
Using the training config rl_discrete_skill.yaml, I run the training script python -u -m habitat_baselines.run and find that the observation returned by env.reset is:
{'head_depth': ...,
 'object_embedding': ...,
 'ovmm_nav_goal_segmentation': tensor,
 'receptacle_segmentation': tensor,
 'robot_start_compass': tensor([[1.1102e-16]], device='cuda:0'),
 'robot_start_gps': tensor([[-0., 0.]], device='cuda:0'),
 'start_receptacle': tensor([[14]], device='cuda:0'),
 'rewards': tensor([[0.]], device='cuda:0'),
 'value_preds': tensor([[0.]], device='cuda:0'),
 'returns': tensor([[0.]], device='cuda:0'),
 'action_log_probs': tensor([[0.]], device='cuda:0'),
 'actions': tensor([[0]], device='cuda:0'),
 'prev_actions': tensor([[0]], device='cuda:0'),
 'masks': tensor([[False]], device='cuda:0')}
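As a standalone illustration (plain Python, not habitat code), one can check which visual modalities came back from env.reset and fail fast when RGB is missing; the key names `head_rgb`/`head_depth` match the sensor UUIDs used in the config below:

```python
def missing_modalities(obs: dict, required=("head_rgb", "head_depth")):
    """Return the required observation keys that are absent from obs."""
    return [key for key in required if key not in obs]

# Keys as printed above: only depth is present, no RGB.
obs_keys = {
    "head_depth": None,
    "object_embedding": None,
    "ovmm_nav_goal_segmentation": None,
    "receptacle_segmentation": None,
    "robot_start_compass": None,
    "robot_start_gps": None,
    "start_receptacle": None,
}
print(missing_modalities(obs_keys))  # → ['head_rgb']
```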
which contains only depth information, with no RGB. Is there a way to add RGB to the observations? What modification should I make to the config YAML?
I printed this info in PPOTrainer at line 284:
# ... inside PPOTrainer
self._agent = self._create_agent(resume_state)
if self._is_distributed:
    self._agent.updater.init_distributed(find_unused_params=False)  # type: ignore
self._agent.post_init()

self._is_static_encoder = (
    not self.config.habitat_baselines.rl.ddppo.train_encoder
)
self._ppo_cfg = self.config.habitat_baselines.rl.ppo

observations = self.envs.reset()
# printed the observations here
observations = self.envs.post_step(observations)
batch = batch_obs(observations, device=self.device)
# ...
Trying the suggestion from #493 by adding a new defaults line made no change:
# in rl_discrete_skill.yaml
defaults:
  - /benchmark/ovmm: gaze
  - /habitat_baselines: habitat_baselines_rl_config_base
  - /habitat_baselines/rl/policy/obs_transforms:
      - resize_shortest_edge_base
      - center_cropper_base
  - /habitat/simulator/sim_sensors@habitat_baselines.eval.extra_sim_sensors.third_rgb_sensor: third_rgb_sensor
  - /habitat/simulator/agents@habitat.simulator.agents.main_agent: rgbdp_head_rgb_third_agent
  - _self_
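One plausible reason this line has no effect at training time: the `@habitat_baselines.eval.extra_sim_sensors.third_rgb_sensor` package redirection places the sensor config under the *eval* subtree, while training reads sensors from `habitat.simulator.sim_sensors`. A plain-dict sketch of that redirection (this is an illustration, not Hydra itself):

```python
def place(cfg: dict, dotted_path: str, value):
    """Insert value at a dotted key path, creating intermediate dicts."""
    node = cfg
    keys = dotted_path.split(".")
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return cfg

cfg = {"habitat": {"simulator": {"sim_sensors": {"head_depth_sensor": {}}}}}
place(
    cfg,
    "habitat_baselines.eval.extra_sim_sensors.third_rgb_sensor",
    {"type": "rgb"},  # placeholder value for illustration
)

# Training reads habitat.simulator.sim_sensors, which still has no RGB sensor:
print("rgb" in str(cfg["habitat"]["simulator"]["sim_sensors"]))  # False
```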
Motivation
Fine-tuning new models with image input during PPO training, which I guess is a common use case.
We do not render RGB because our baselines do not use it, and to avoid unnecessary rendering cost.
> It made no change when I tried #493 by adding a new line:
Did you also add it here: https://github.com/facebookresearch/habitat-lab/blob/45aa489a84a853cb10b9e7c87383262831a6bd22/habitat-lab/habitat/config/benchmark/ovmm/nav_to_obj.yaml#L13?
I added some keys at the position you mentioned, like this, and it worked.
# @package _global_
defaults:
  - /habitat: habitat_config_base
  - /habitat/simulator/agents@habitat.simulator.agents.main_agent: rgbdp_head_rgb_third_agent
  - /habitat/task/ovmm: nav_to_obj
  - /habitat/dataset/ovmm: hssd
  - _self_

habitat:
  gym:
    obs_keys:
      - head_depth
      - head_rgb
      - object_embedding
      - ovmm_nav_goal_segmentation
      - receptacle_segmentation
      - start_receptacle
      - robot_start_gps
      - robot_start_compass
  environment:
    max_episode_steps: 400
  simulator:
    type: OVMMSim-v0
    additional_object_paths:
      - data/objects/train_val/amazon_berkeley/configs/
      - data/objects/train_val/google_scanned/configs/
      - data/objects/train_val/ai2thorhab/configs/objects/
      - data/objects/train_val/hssd/configs/objects/
    debug_render_goal: False
    debug_render: False
    concur_render: True
    auto_sleep: True
    requires_textures: False
    kinematic_mode: True
    agents:
      main_agent:
        radius: 0.3
        height: 1.41
        articulated_agent_urdf: data/robots/hab_stretch/urdf/hab_stretch.urdf
        articulated_agent_type: "StretchRobot"
        ik_arm_urdf: null
        sim_sensors:
          head_rgb_sensor:
            height: 640
            width: 480
            hfov: 42
            position: [0, 1.31, 0]
          head_depth_sensor:
            height: 640
            width: 480
            hfov: 42
            position: [0, 1.31, 0]
          head_panoptic_sensor:
            height: 640
            width: 480
            hfov: 42
            position: [0, 1.31, 0]
    habitat_sim_v0:
      allow_sliding: False
      enable_physics: True
      needs_markers: false
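The `habitat.gym.obs_keys` list above acts as a whitelist: an observation must both be produced by a configured sensor and be listed there to reach the policy. A minimal sketch of that filtering behavior (assumed semantics, plain Python rather than the habitat gym wrapper):

```python
OBS_KEYS = [
    "head_depth", "head_rgb", "object_embedding",
    "ovmm_nav_goal_segmentation", "receptacle_segmentation",
    "start_receptacle", "robot_start_gps", "robot_start_compass",
]

def filter_obs(raw_obs: dict, keys=OBS_KEYS) -> dict:
    """Keep only the observations whitelisted in obs_keys."""
    return {k: v for k, v in raw_obs.items() if k in keys}

# A sensor output not listed in obs_keys (e.g. third_rgb) is dropped.
raw = {"head_depth": 0, "head_rgb": 1, "third_rgb": 2}
print(sorted(filter_obs(raw)))  # → ['head_depth', 'head_rgb']
```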