kengz/SLM-Lab

How to improve the convergence of the training loss?

williamyuanv0 opened this issue · 0 comments

Hi kengz, I find that the convergence of the training loss (= value loss + policy loss) of the PPO algorithm applied to the game Pong is poor (see Fig.1), but the corresponding mean_returns show a good upward trend and reach convergence (see Fig.2).
Why is that? How can the convergence of the training loss be improved? I have tried many PPO improvement tricks, but none of them worked.
Fig.1: training loss vs frame (ppo_pong_t0_s0_session_graph_eval_loss_vs_frame)
Fig.2: mean returns vs frames (ppo_pong_t0_s0_session_graph_eval_mean_returns_vs_frames)
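For clarity, by "training loss" I mean the usual combined PPO objective: the clipped policy surrogate plus a value-function term. A minimal sketch of that computation, using illustrative variable names rather than SLM-Lab's actual API:

```python
# Minimal sketch of the combined PPO loss referred to above (illustrative,
# not SLM-Lab's actual code). The policy term is a clipped surrogate on
# advantages, so it is not a supervised loss and need not decrease
# monotonically even while returns improve.
import torch

def ppo_loss(log_probs, old_log_probs, advantages, value_preds, returns,
             clip_eps=0.2, val_coef=0.5):
    # Clipped policy surrogate (maximized, so negated for a loss)
    ratio = torch.exp(log_probs - old_log_probs)
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(surr1, surr2).mean()
    # Value loss: MSE between predicted values and empirical returns
    value_loss = (value_preds - returns).pow(2).mean()
    return policy_loss + val_coef * value_loss
```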