opendilab/LightZero

Question: How can I set up a custom environment?

lunathanael opened this issue · 3 comments

Hello,
I came across this repository and was wondering about the steps and requirements for setting up a custom environment and using it with the algorithms. Specifically, which functions would a Gym environment need to implement, for example?
Thanks

Greetings,

We have prepared documentation on how to customize environments and algorithms within the LightZero framework, which can be accessed through the following links:

Although these documents provide fundamental guidance, they may not encompass all details. Should you encounter any issues or have queries during the customization process, please do not hesitate to reach out to us. We are eager to assist you in ensuring a smooth customization experience.
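In the meantime, here is a minimal sketch of what a custom environment typically looks like. It assumes the DI-engine-style `BaseEnv` interface that LightZero's bundled environments build on, along with the `observation` / `action_mask` / `to_play` dict convention; the class name, registry key, and shapes are purely illustrative, so please verify the exact requirements against the documentation:

```python
# A minimal sketch of a custom LightZero environment, assuming the
# DI-engine-style BaseEnv interface. Class name, registry key, and
# shapes are illustrative only.
import gym
import numpy as np
from ding.envs import BaseEnv, BaseEnvTimestep
from ding.utils import ENV_REGISTRY


@ENV_REGISTRY.register('my_custom_env')
class MyCustomEnv(BaseEnv):

    def __init__(self, cfg=None):
        self._cfg = cfg
        self._observation_space = gym.spaces.Box(0., 1., shape=(4,), dtype=np.float32)
        self._action_space = gym.spaces.Discrete(2)
        self._reward_space = gym.spaces.Box(-1., 1., shape=(1,), dtype=np.float32)

    def reset(self):
        self._step_count = 0
        self._episode_return = 0.
        obs = np.zeros(4, dtype=np.float32)
        # LightZero consumes a dict: the raw observation, a mask over legal
        # actions, and whose turn it is (-1 for single-player settings).
        return {'observation': obs, 'action_mask': np.ones(2, dtype=np.int8), 'to_play': -1}

    def step(self, action):
        self._step_count += 1
        obs = np.random.rand(4).astype(np.float32)  # placeholder dynamics
        reward = np.array([1.], dtype=np.float32)
        self._episode_return += reward[0]
        done = self._step_count >= 10
        info = {}
        if done:
            info['eval_episode_return'] = self._episode_return
        obs_dict = {'observation': obs, 'action_mask': np.ones(2, dtype=np.int8), 'to_play': -1}
        return BaseEnvTimestep(obs_dict, reward, done, info)

    def seed(self, seed, dynamic_seed=True):
        self._seed = seed
        np.random.seed(seed)

    def close(self):
        pass

    @property
    def observation_space(self):
        return self._observation_space

    @property
    def action_space(self):
        return self._action_space

    @property
    def reward_space(self):
        return self._reward_space

    def __repr__(self):
        return 'MyCustomEnv'
```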

Best wishes!

Thanks for getting back to me!
I appreciate the fast response and clear instructions. Another question I had: taking a look at the documentation, it suggests two modes, board game and non-board-game (Atari-based). Would it be possible to implement an environment that is two-player but allows all actions as legal? I suppose I could simply pass all moves as legal. Adding onto this, could I encode a no-move as an all-zero embedded plane? Do you have any suggestions for algorithms that are self-play but less data-hungry?
I understand my questions are fairly beginner-level, and I appreciate any guidance.

Thank you for the clarification!

Certainly, an environment in which two players can take any action (i.e., all actions are treated as legal on every turn) can be created. In the field of Multi-Agent Reinforcement Learning (MARL), such environments are quite common; for example, the PettingZoo library provides many of them, and you can browse PettingZoo's GitHub repository for more information. In the LightZero project, we have some ongoing pull requests, such as PR#149, PR#153, and PR#171. You can follow the updates on these pull requests, or contribute your own insights.
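To make this concrete, here is a minimal sketch: in LightZero-style observation dicts, legality is carried by the `action_mask` field, so "all actions legal" is simply an all-ones mask. The dict keys follow the convention used by LightZero's board-game environments; everything else is illustrative:

```python
# A hedged sketch: legality is conveyed via 'action_mask', so a two-player
# environment where every action is legal just returns an all-ones mask.
import numpy as np

def make_obs(board_features, current_player, action_space_size):
    return {
        'observation': board_features,                             # raw features
        'action_mask': np.ones(action_space_size, dtype=np.int8),  # every action legal
        'to_play': current_player,                                 # e.g. 1 or 2 in self-play mode
    }
```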

Regarding encoding "no action" as an all-zero embedded plane: this is technically feasible. However, the environment's design must make explicit how this condition is to be interpreted, so that agents can recognize it and learn a "no action" strategy.
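For illustration, a hypothetical AlphaZero-style "last move" plane shows both the idea and the caveat (the shapes and the encoding itself are assumptions, not something LightZero prescribes):

```python
# Illustrative only: a "last move" plane marks the previous move's square
# with a 1; leaving the plane all zeros is one way to encode a "no move"
# turn. As noted above, this is only learnable if "no move" is the sole
# condition that yields an all-zero plane.
import numpy as np

def last_move_plane(last_action, h, w):
    plane = np.zeros((h, w), dtype=np.float32)
    if last_action is not None:  # None = pass / first turn
        plane[last_action // w, last_action % w] = 1.0
    return plane
```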

For self-play settings where data efficiency matters, you might refer to research papers on data-efficient reinforcement learning. For instance, data utilization can be improved through methods such as model-based reinforcement learning and representation learning. Relevant resources are collected in awesome-model-based-RL.