Logging in environments
Closed this issue · 3 comments
I created an environment for a new robot in a repository derived from the IsaacGymEnvs preview release (https://developer.nvidia.com/isaac-gym).
I would like to log the individual terms of the environment's reward function to see what the neural network optimizes first. For this I would need to either create a new torch.utils.tensorboard.SummaryWriter or reuse the existing one from A2CBase. What is the best way to log scalar values from the environment?
Hi, could you take a look at AlgoObserver:
https://github.com/NVIDIA-Omniverse/IsaacGymEnvs/blob/3c75f71f463e840c51eebcf3db2a18267d45f188/isaacgymenvs/utils/rlgames_utils.py#L101
There you can log additional stats.
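To make the observer pattern concrete, here is a minimal, self-contained sketch of an observer that averages per-term reward stats the environment publishes in its `infos` dict and writes them to the trainer's SummaryWriter. The hook names (`after_init`, `process_infos`, `after_print_stats`) follow rl_games' AlgoObserver interface; the `rew_terms` key is an assumption, not part of any API, and a real implementation would subclass `AlgoObserver` from rl_games directly.

```python
from collections import defaultdict

class RewardTermObserver:
    """Sketch of an rl_games-style AlgoObserver that averages per-term
    reward stats the env puts into `infos` and logs them per epoch.
    The 'rew_terms' key is a made-up convention for this example."""

    def after_init(self, algo):
        # rl_games passes the trainer here; it owns a SummaryWriter.
        self.writer = getattr(algo, 'writer', None)
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def process_infos(self, infos, done_indices):
        # Accumulate each scalar reward term published by the env.
        for name, value in infos.get('rew_terms', {}).items():
            self.sums[name] += float(value)
            self.counts[name] += 1

    def after_print_stats(self, frame, epoch_num, total_time):
        # Flush running means to TensorBoard once per stats interval.
        for name in self.sums:
            mean = self.sums[name] / max(self.counts[name], 1)
            if self.writer is not None:
                self.writer.add_scalar(f'rewards/{name}', mean, frame)
        self.sums.clear()
        self.counts.clear()
```

In the environment's post-physics step you would then fill `infos['rew_terms']` with the mean of each reward component across your parallel envs.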
@TobiasJacob There is also a second option: rl_games supports multi-head values.
You can set self.value_size = N and, in get_env_info, set info['value_size'] = self.value_size.
In that case the environment needs to return a vector of rewards. Each component is automatically reported to TensorBoard, and the critic network will have output shape N instead of 1.
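A rough sketch of what that environment-side change looks like, assuming a NumPy-based env wrapper; the class and field names here are illustrative (only the `value_size` entry in the env info and the vector-shaped reward follow the convention described above):

```python
import numpy as np

NUM_ENVS = 4
VALUE_SIZE = 3  # number of separate reward terms / critic heads

class MultiHeadRewardEnv:
    """Illustrative env for the multi-head value setup: env info
    advertises value_size, and step() returns a reward *vector*
    per environment instead of a single scalar."""

    def __init__(self):
        self.value_size = VALUE_SIZE

    def get_env_info(self):
        return {
            'observation_space': None,  # gym spaces in a real env
            'action_space': None,
            'value_size': self.value_size,  # key rl_games reads
        }

    def step(self, actions):
        obs = np.zeros((NUM_ENVS, 8), dtype=np.float32)
        # One column per reward term (values here are placeholders).
        rewards = np.stack([
            np.full(NUM_ENVS, 0.5),   # e.g. forward-velocity term
            np.full(NUM_ENVS, -0.1),  # e.g. action penalty
            np.full(NUM_ENVS, 0.0),   # e.g. alive bonus
        ], axis=-1)
        dones = np.zeros(NUM_ENVS, dtype=bool)
        return obs, rewards, dones, {}
```

The reward array then has shape (num_envs, value_size), and the critic is built with value_size outputs, so each term gets its own value head and its own TensorBoard curve.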
Thanks a lot for the answer. I was on winter break, but I will try it out!