kengz/SLM-Lab

Need help understanding the REINFORCE algorithm implementation in the Foundations of Deep RL book

vvivek921 opened this issue · 2 comments

Hi Wah Loon / Laura,
I have started reading your book on deep RL and have enjoyed reading it so far.
Apologies for asking my question here, but I couldn't think of a better place to post it. Please let me know if there is a discussion forum for the book where I can ask questions going forward.
This question is about the first standalone torch implementation of the REINFORCE algorithm given in the book.
What I need help in understanding is:
As the criterion (loss) decreases, the reward should increase. However, when I run the code, I observe that the reward increases as the criterion increases.

Criterion (loss) is defined as
loss = - log_probs * rets # gradient term; Negative for maximizing
Because of the negative sign, isn't a lower criterion (loss) better? But my results seem to show the opposite.
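For reference, here is a rough sketch of the surrounding computation as I understand it (paraphrased, so the variable names may differ from the book's exact code):

```python
import numpy as np
import torch

gamma = 0.99  # discount factor used in the example

def compute_loss(log_probs, rewards):
    """log_probs: list of per-step action log-prob tensors; rewards: list of floats."""
    T = len(rewards)
    rets = np.empty(T, dtype=np.float32)
    future_ret = 0.0
    # Monte Carlo returns: rets[t] = sum_{t' >= t} gamma^(t'-t) * rewards[t']
    for t in reversed(range(T)):
        future_ret = rewards[t] + gamma * future_ret
        rets[t] = future_ret
    rets = torch.tensor(rets)
    log_probs = torch.stack(log_probs)
    loss = - log_probs * rets  # gradient term; Negative for maximizing
    return torch.sum(loss)     # scalar summed over all time steps of the episode
```

The loss printed per episode below is this summed scalar: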

Episode 0, loss: 240.3678741455078, total_reward: 27.0, solved: False
Episode 1, loss: 134.7480926513672, total_reward: 20.0, solved: False
Episode 2, loss: 47.81584930419922, total_reward: 12.0, solved: False
Episode 3, loss: 38.16853713989258, total_reward: 11.0, solved: False
Episode 4, loss: 130.42645263671875, total_reward: 20.0, solved: False
Episode 5, loss: 48.20455551147461, total_reward: 13.0, solved: False

...

Episode 295, loss: 6347.0849609375, total_reward: 200.0, solved: True !!!!!!
Episode 296, loss: 316.5134582519531, total_reward: 37.0, solved: False
Episode 297, loss: 6321.185546875, total_reward: 200.0, solved: True !!!!!!
Episode 298, loss: 6334.77197265625, total_reward: 200.0, solved: True !!!!!!
Episode 299, loss: 6197.91259765625, total_reward: 200.0, solved: True !!!!!!

kengz commented

Hi @vvivek921, glad that you're enjoying the book! This is definitely the right place to ask, and it would also help other readers/users who might have the same question.

For your question: the loss can start out underestimated, hence the low initial value, since it depends on the initial weights of the policy network, which produce the log_probs. While the network is still learning, the loss will keep changing as a result of minimizing and correcting (so it may increase if its initial value was underestimated), as well as because of the changing returns, until it eventually converges to a relatively stable final value.
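To make the "changing returns" part concrete, here is a rough back-of-the-envelope check. It assumes a fixed, roughly uniform 2-action policy (per-step log-prob of log 0.5) and CartPole's +1 reward per step, so it is not your actual policy, but it shows how the episode length and returns alone move the loss scale:

```python
import math

gamma = 0.99
log_prob = math.log(0.5)  # assumed fixed near-uniform 2-action policy (illustrative only)

def episode_loss(T):
    # loss = -sum_t log_prob * ret_t, with ret_t = sum_{t' >= t} gamma^(t'-t) * 1
    rets, future_ret = [], 0.0
    for _ in range(T):
        future_ret = 1.0 + gamma * future_ret
        rets.append(future_ret)
    return -log_prob * sum(rets)

print(episode_loss(11))   # ≈ 44, an 11-step episode (cf. the early episodes in your log)
print(episode_loss(200))  # ≈ 7900, a full 200-step solved episode
```

So a solved 200-step episode produces a loss in the thousands simply because the returns (and the number of terms in the sum) are much larger, which is roughly the scale jump between your early episodes and the solved ones.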

This is also an interesting observation: in deep RL, the loss curve isn't always reliable for reasons like this, and that is partly why debugging can be tricky! Granted, losses in RL are quite different from losses in supervised learning.

If you look at the loss graph (inside the ./data/{your_experiment_folder}/graph/ folder), you will see a graph that is quite noisy, like the one below (which has a different scale because it uses a different advantage baseline). It is especially hard to tell what is going on because the environment is so simple and can be solved on such a short timescale.

[Image: reinforce_cartpole_t0_s1_session_graph_train_loss_vs_frame]

For a harder environment that requires a longer timescale to solve, such as an Atari game, the more pronounced (decreasing) trend of the loss over a much longer time is easier to observe, like the one below (PPO on Atari Pong, max total reward = 21), shown together with its return graph. Note that even here the loss still increases slightly toward the end.

Hope this helps!

[Image: ppo_pong_t0_s0_session_graph_eval_loss_vs_frame]
[Image: ppo_pong_t0_s0_session_graph_eval_mean_returns_ma_vs_frames]