opendilab/LightZero

How to use EfficientZero for board games

drblallo opened this issue · 5 comments

Hi,

I am trying to understand how to use the LightZero framework. I have been able to use both AlphaZero and MuZero to run TicTacToe, and EfficientZero to run the memory env, but I don't understand how one is supposed to use EfficientZero for board games.

I tried to edit the memory configuration file, porting over the settings from the TicTacToe one, but the program fails in various ways.

Is there anything fundamental that prevents using the TicTacToe env with the EfficientZero algorithm, or is it just a matter of understanding the exact impact of all the parameters within a configuration file?

Greetings, EfficientZero is primarily designed to enhance sample efficiency in environments with image-based inputs. As board games typically do not rely on image inputs, the performance gains from employing EfficientZero in such contexts might not be particularly significant, which is why we have not previously provided configurations for board games. However, if you are interested in exploring the performance of EfficientZero in board game settings, we have now provided a configuration example for TicTacToe in #204. Should you have any questions or wish to engage in further discussion, please feel free to reach out to us at any time.
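For illustration, here is a minimal sketch of what such a config might look like, modelled on the existing MuZero bot-mode TicTacToe config. The actual file lives in #204; the field names and values below are assumptions for illustration only.

```python
from easydict import EasyDict

# Sketch of an EfficientZero config for TicTacToe (bot mode); see #204 for the real one.
main_config = EasyDict(dict(
    exp_name='tictactoe_efficientzero_bot_mode_seed0',
    env=dict(
        battle_mode='play_with_bot_mode',
        collector_env_num=8,
        evaluator_env_num=5,
        n_evaluator_episode=5,
        manager=dict(shared_memory=False),
    ),
    policy=dict(
        model=dict(
            observation_shape=(3, 3, 3),
            action_space_size=9,
        ),
        cuda=True,
        batch_size=256,
        num_simulations=25,
    ),
))

create_config = EasyDict(dict(
    env=dict(
        type='tictactoe',
        import_names=['zoo.board_games.tictactoe.envs.tictactoe_env'],
    ),
    env_manager=dict(type='subprocess'),
    policy=dict(
        type='efficientzero',
        import_names=['lzero.policy.efficientzero'],
    ),
))

if __name__ == "__main__":
    from lzero.entry import train_muzero
    train_muzero([main_config, create_config], seed=0)
```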

I see. If I can manage to collect some data, I will share it.

In the meantime, I have been trying to understand various configuration options. I am not sure the following is a bug, but it does look like one to me:

target_reward_categorical = phi_transform(self.reward_support, transformed_target_reward)

phi_transform is applied regardless of whether MuZero is initialized with categorical rewards or not. This means that when MuZero is run with categorical rewards turned off, the reward target computation fails. I turned them off by passing categorical_distribution=False to the model dict in the bot-mode TicTacToe config file.

Screenshot_2024-03-27_18-11-04
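For context, this is roughly the kind of guard I would have expected around the quoted line. It is only a sketch, not LightZero's actual code; phi_transform is passed in as an argument here just to keep the snippet self-contained.

```python
# Hedged sketch: gate the scalar-to-support projection on the config flag,
# instead of applying it unconditionally as in the quoted line.
def make_reward_target(transformed_target_reward, reward_support,
                       categorical_distribution, phi_transform):
    if categorical_distribution:
        # project the scalar target onto the discrete reward support
        return phi_transform(reward_support, transformed_target_reward)
    # with categorical_distribution=False, keep the scalar target and
    # regress it directly (e.g. with an MSE loss)
    return transformed_target_reward
```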

Maybe I am missing something about how to use them. It is unclear to me why one should prefer categorical rewards when the environment reward is just a single float.

Furthermore, I tried changing this line of code in the TicTacToe env:

reward = np.array(float(winner == self.current_player)).astype(np.float32)

to

reward = np.array(float(winner == -1)).astype(np.float32)

with the intention of seeing how long it would take MuZero to learn to always play for a draw, in the vs-bot version of the setup.
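For clarity, here is the modified reward in isolation (a minimal sketch; I am assuming winner == -1 is the value the env uses when the game ends without a winner):

```python
import numpy as np

def draw_reward(winner: int) -> np.ndarray:
    # reward 1.0 only when the game ends with no winner (assumed to be winner == -1),
    # so the agent is pushed to always play for a draw against the bot
    return np.array(float(winner == -1)).astype(np.float32)
```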

When I did so, it learned something, but after 153,000 steps and 90 minutes of training it had not managed to learn this perfectly. Is this intended? I understand that MuZero is a complex model, but this should not be much harder to learn than the always-winning version.

Screenshot_2024-03-27_21-10-35

Screenshot_2024-03-27_21-16-18

Maybe I am missing something about how to use them. It is unclear to me why one should prefer categorical rewards when the environment reward is just a single float.

You can find a detailed analysis in the following papers: "Improving Regression Performance with Distributional Losses" (ICML 2018), "Observe and Look Further: Achieving Consistent Performance on Atari" (2018), and "Stop Regressing: Training Value Functions via Classification for Scalable Deep RL" (2024). These studies indicate that the primary advantage of adopting a categorical distribution is more stable gradients in the face of noisy and non-stationary targets. Such stability is a key factor for performance and scalability, which is why LightZero has this option enabled by default.
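To make the idea concrete, below is a minimal, self-contained sketch of the "two-hot" scalar-to-support projection that categorical heads rely on. This is the same idea as the phi transform discussed above, though not LightZero's exact implementation: the scalar target is spread over its two nearest support bins, so the head can be trained with a cross-entropy loss instead of scalar regression.

```python
import torch

def scalar_to_support(x: torch.Tensor, support_size: int) -> torch.Tensor:
    """Sketch of a two-hot projection onto the support [-support_size, ..., support_size].

    Each scalar is split between its two nearest integer bins, e.g. 0.3 becomes
    70% mass on bin 0 and 30% mass on bin +1.
    """
    x = x.clamp(-support_size, support_size)
    floor = x.floor()
    prob_upper = x - floor                         # weight on the upper neighbour bin
    target = torch.zeros(*x.shape, 2 * support_size + 1)
    lower_idx = (floor + support_size).long()
    target.scatter_(-1, lower_idx.unsqueeze(-1), (1 - prob_upper).unsqueeze(-1))
    upper_idx = (lower_idx + 1).clamp(max=2 * support_size)
    target.scatter_add_(-1, upper_idx.unsqueeze(-1), prob_upper.unsqueeze(-1))
    return target

# e.g. a scalar target of 0.3 on a support of size 2 -> [0, 0, 0.7, 0.3, 0]
print(scalar_to_support(torch.tensor([0.3]), support_size=2))
```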

but this should not be much harder to learn than the always-winning version.

Hello, could you please provide the configuration file for your agent as well as the complete TensorBoard log files? This would be beneficial for our in-depth analysis. Additionally, it is advisable to save some replay data from the training process, so that we can observe the learning behaviors and evolution of the agent.