facebookresearch/mbrl-lib

Using Wrapper Class for Custom GYM Env

MishraIN opened this issue · 7 comments

I have a custom OpenAI Gym env and I am trying to use the mbrl wrapper, but I am getting the error `name 'model_env_args' is not defined`. I am trying to follow the example here: https://arxiv.org/pdf/2104.10159.pdf. Here's my code.

```python
import gym
import mbrl.models as models
import numpy as np

net = models.GaussianMLP(in_size=14, out_size=12, device="cpu")
wrapper = models.OneDTransitionRewardModel(net, target_is_delta=True, learned_rewards=True)
model_env = models.ModelEnv(wrapper, *model_env_args, term_fn=hopper)
```

Hi @MishraIN. Apologies, the paper is a bit misleading on this point. The best approach would be to look at an example in one of the algorithm implementations, such as MBPO or PlaNet.

The signature of ModelEnv's constructor is described here.
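Based on that signature, a construction along these lines should work. This is a minimal sketch: the Hopper env and termination function are placeholders carried over from your snippet, so substitute your own env and a termination function that matches its dynamics.

```python
import gym
import mbrl.models as models
import mbrl.env.termination_fns as termination_fns

# The env is passed to ModelEnv so it can read the observation and
# action spaces; replace this placeholder with your custom env.
env = gym.make("Hopper-v2")

net = models.GaussianMLP(in_size=14, out_size=12, device="cpu")
wrapper = models.OneDTransitionRewardModel(
    net, target_is_delta=True, learned_rewards=True
)

# ModelEnv takes the env, the wrapped model, and a termination
# function; reward_fn can stay None since the model learns rewards.
model_env = models.ModelEnv(
    env, wrapper, termination_fn=termination_fns.hopper, reward_fn=None
)
```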

Let me know if you have additional questions.

Thanks, @luisenp! I am trying to use a custom env built with gym instead of mbrl.env.cartpole_continuous. My action space is Box(1,) and my observation space is Box(600, 800, 3).

I am running into many errors trying to use the custom env. How can I use my custom env with mbrl-lib?
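For reference, here is roughly how the env declares those spaces. This is a trimmed-down skeleton, not the actual game code; the class name, bounds, and the reset/step bodies are placeholders.

```python
import gym
import numpy as np
from gym import spaces

class ChopperScapeEnv(gym.Env):
    """Hypothetical skeleton with the spaces described above."""

    def __init__(self):
        super().__init__()
        # One continuous action; 600x800 RGB image observations.
        self.action_space = spaces.Box(
            low=-1.0, high=1.0, shape=(1,), dtype=np.float32
        )
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(600, 800, 3), dtype=np.uint8
        )

    def reset(self):
        return np.zeros(self.observation_space.shape, dtype=np.uint8)

    def step(self, action):
        obs = np.zeros(self.observation_space.shape, dtype=np.uint8)
        return obs, 0.0, False, {}
```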

For that type of observation space, following the PlaNet example would be the most appropriate. Do you have any code samples I can take a look at?
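One thing to keep in mind: PlaNet-style models typically operate on small frames (e.g., 64x64), so 600x800x3 observations will likely need to be resized first. A generic gym wrapper along these lines would do it; this is a sketch, not an mbrl-lib API.

```python
import gym
import numpy as np
from gym import spaces

class ResizeObservation(gym.ObservationWrapper):
    """Downsample image observations to a small square frame (sketch)."""

    def __init__(self, env, size=64):
        super().__init__(env)
        self.size = size
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(size, size, 3), dtype=np.uint8
        )

    def observation(self, obs):
        # Nearest-neighbor downsampling via index selection, to avoid
        # pulling in an extra image-processing dependency.
        h, w, _ = obs.shape
        rows = np.linspace(0, h - 1, self.size).astype(int)
        cols = np.linspace(0, w - 1, self.size).astype(int)
        return obs[rows][:, cols]
```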

Thanks for your prompt reply! I am using mbrl.env.cartpole_continuous as a template for the ChopperScape game.
MBRL_Code.zip

It would be much better if you submitted a pull request containing your script. We don't need to merge it, but it will make review and discussion much easier.

Here's the zip file attached. I am really new to model-based RL and barely understand the code, so please pardon my ignorance.
MBRL-3.7.zip

Hi @MishraIN, as I mentioned above, the proper mechanism to do this would be to start a pull request from your fork of the repository. Without one, I'm afraid I won't be able to help you.