tspooner/rsrl

How to write custom environments for training on a custom problem?

Opened this issue · 3 comments

First of all, I would like to say that this is a great library for reinforcement learning. Thanks for working on this. The examples are also great, but after reading them I still don't have clarity on how to use rsrl for custom environments and problems. Could you please shed some light on this? Thanks for the great work.

Hey,

Thanks for the feedback. It's great to hear that you're interested in using the library - I apologise for it not being very obvious how to use it. The documentation is not great yet because I've been focusing on developing features and using it in my own research.

I have plans for how to make all of this a lot easier. These features are in active development, and some examples of one particular application will soon be uploaded to my repo. These will probably help with understanding until I finish writing the documentation.

I will do my best to give you a proper explanation as soon as my current deadlines are over. Sorry for the delay.

Tom

Any updates on this front?

Hey, so in short you need to implement the Domain trait. See, for example, the domains already implemented in the framework - those will help. I realise that's not much information, but I am incredibly close to publishing a totally revised framework. This will change most of the interfaces and include a dedicated rsrl_domains sub-crate, which will have more information to help. Keep an eye out over the next week or two!
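To make the idea concrete, here is a minimal, self-contained sketch of the general pattern: an environment type that implements a trait exposing the current state and a step function returning an observation plus reward. Note that the `Domain`, `Observation`, and `Corridor` types below are simplified stand-ins written for illustration only - they are not rsrl's actual API, so check the crate's own domain implementations for the real trait signature (which also involves state and action Space types).

```rust
// Hypothetical, simplified sketch of a Domain-like trait.
// NOT rsrl's real API - see the crate's built-in domains for the actual trait.

/// Result of taking one step in the environment.
enum Observation {
    Full { state: usize, reward: f64 },
    Terminal { state: usize, reward: f64 },
}

/// A minimal environment interface: observe the state, apply an action.
trait Domain {
    fn state(&self) -> usize;
    fn step(&mut self, action: usize) -> Observation;
}

/// Toy corridor: start at position 0, reach position 4 for a reward of +1.
struct Corridor {
    pos: usize,
}

impl Domain for Corridor {
    fn state(&self) -> usize {
        self.pos
    }

    fn step(&mut self, action: usize) -> Observation {
        // action 0 = move left, action 1 = move right
        self.pos = match action {
            1 => (self.pos + 1).min(4),
            _ => self.pos.saturating_sub(1),
        };
        if self.pos == 4 {
            Observation::Terminal { state: self.pos, reward: 1.0 }
        } else {
            Observation::Full { state: self.pos, reward: 0.0 }
        }
    }
}

fn main() {
    let mut env = Corridor { pos: 0 };
    println!("start at {}", env.state());
    // Walk right until the terminal state is reached.
    loop {
        match env.step(1) {
            Observation::Terminal { state, reward } => {
                println!("terminal at {} with reward {}", state, reward);
                break;
            }
            Observation::Full { state, .. } => println!("at {}", state),
        }
    }
}
```

Once a type implements the framework's trait, the agents and evaluation loops shipped with the library can drive it the same way they drive the built-in domains.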

Cheers,
Tom