Denys88/rl_games

some suggestions for rl-games


  • Allow masking of some environments so that policies can be validated outside the domain-randomization (DR) ranges they were trained on.
  • Dropout is not supported yet, but it would be nice to have. More generally, it would be nice to have support for arbitrary networks to be plugged in without going through YAML or changing rl-games code in any way.
  • Allow changing the LSTM states from outside rl-games if possible. We may want to corrupt LSTM states on the fly as another adversarial perturbation, to make the policies robust to such corruption.
  • Allow test=True with a checkpoint. @ArthurAllshire has already done it, but I think it would be good to have that in the same wrapper. It should be pretty straightforward and will make our lives much easier.
  • Unit tests for the single-GPU / multi-GPU implementations, checking memory limits, etc.
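The LSTM-corruption idea in the third bullet could look roughly like this. This is a minimal PyTorch sketch, assuming direct access to the recurrent states between rollout steps; the helper name `corrupt_lstm_state` and the noise scale are invented for illustration and are not rl-games API:

```python
import torch
import torch.nn as nn

def corrupt_lstm_state(h, c, noise_std=0.1):
    """Perturb LSTM hidden/cell states with Gaussian noise.

    Simulates an on-the-fly adversarial corruption of the recurrent
    state; `noise_std` is an illustrative hyperparameter.
    """
    h_noisy = h + noise_std * torch.randn_like(h)
    c_noisy = c + noise_std * torch.randn_like(c)
    return h_noisy, c_noisy

# Toy rollout: one LSTM step, corrupt the state, step again.
torch.manual_seed(0)
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
obs = torch.randn(4, 1, 8)        # (num_envs, seq_len, obs_dim)
h = torch.zeros(1, 4, 16)         # (num_layers, num_envs, hidden_size)
c = torch.zeros(1, 4, 16)

out, (h, c) = lstm(obs, (h, c))   # normal step
h, c = corrupt_lstm_state(h, c)   # adversarial perturbation
out2, (h, c) = lstm(obs, (h, c))  # policy must cope with the corrupted state
```

If the rollout loop exposed the `(h, c)` tuple between steps, a perturbation like this could be applied with some probability per step to train for robustness.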

Thanks @ankurhanda and @ArthurAllshire for the feedback.
I finally got a free weekend :)

  1. Masking - will be implemented
  2. What do you mean by arbitrary networks? It is already possible to create a custom neural network. Here is an example in my custom branch with transformers and the neural network from the OpenAI paper: https://github.com/Denys88/IsaacGymEnvs/blob/main/isaacgymenvs/learning/networks/ig_networks.py
  3. We can try, but I'm not sure it will work. I found it is very hard to learn something meaningful in the LSTM state, and if we corrupt it, I believe we will just push the network to learn to ignore its states.
  4. Could you clarify? It might be more of an IsaacGym-side issue.
  5. I have plans to implement unit tests. It would also be nice to see more tests in IG :)
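As a rough illustration of point 2 (custom networks with dropout): the network body itself can be an ordinary `torch.nn.Module`; wiring it into rl-games would then go through its network-builder mechanism as in the linked `ig_networks.py`. The class name, layer sizes, and heads below are invented for this sketch and are not rl-games API:

```python
import torch
import torch.nn as nn

class DropoutActorCritic(nn.Module):
    """Illustrative actor-critic torso with dropout.

    The architecture here is arbitrary; rl-games would wrap something
    like this via a custom network builder rather than using it directly.
    """
    def __init__(self, obs_dim, act_dim, hidden=64, p_drop=0.1):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Dropout(p=p_drop),            # the requested dropout support
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)  # policy (action mean) head
        self.value = nn.Linear(hidden, 1)     # value head

    def forward(self, obs):
        z = self.torso(obs)
        return self.mu(z), self.value(z)

# Forward pass on a batch of 5 observations.
net = DropoutActorCritic(obs_dim=12, act_dim=3)
mu, value = net(torch.randn(5, 12))
```

Note that dropout should be disabled at evaluation time (`net.eval()`), which is one reason it needs first-class support in the training loop rather than only in the network definition.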