Curt-Park/rainbow-is-all-you-need
Rainbow is all you need! A step-by-step tutorial from DQN to Rainbow.
Jupyter Notebook · MIT License
Issues
Test-time action selection
#72 opened by nil123532 - 0
Not handling time limits
#70 opened by carlos-UPC-AI - 2
Save/Load capabilities
#61 opened by chensh3 - 2
Clear memory during n_step_learning
#54 opened by PigUnderRoof - 2
bias_sigma initialization in noisy net
#48 opened by kentropy - 2
Atari
#45 opened by abbas-tari - 2
Update frequency/method and warm-up period
#52 opened by wuxmax - 2
V_min and V_max - Rainbow DQN
#50 opened by AndreasKaratzas - 5
Save memory checkpoints
#49 opened by AndreasKaratzas - 1
Atari
#44 opened by abbas-tari - 3
Update torch, numpy version
#40 opened by MrSyee - 0
What is your version of segment_tree?
#41 opened by SoarAnyway - 1
Categorical DQN parameters for Acrobot
#39 opened - 2
"indices" in the N-step ReplayBuffer undefined
#37 opened by qiyang77 - 1
input state-action pair into Rainbow DQN
#35 opened by junhuang-ifast - 2
Running on Atari Games
#36 opened by FarhaParveen919 - 2
Google Drive, Saving, Loading, Resuming Features
#34 opened by kiranmaya - 4
redundant max in double dqn
#25 opened by DongukJu - 1
Modify the description of N-step buffer
#22 opened by MrSyee - 2
There is a typo in N-step ReplayBuffer
#19 opened by mclearning2 - 7
Assertion error when calculating loss
#17 opened by signalprime - 1
Atari env
#14 opened by mamengyiyi - 4
Some questions on the N-step ReplayBuffer
#10 opened by ty2000 - 2
N-step ReplayBuffer's store use the wrong act?
#8 opened by ty2000 - 1
NBViewer link for double DQN doesn't work
#7 opened by ranran9991 - 4
Add a new contributor
#4 opened by Curt-Park - 6
Add contributors
#1 opened by Curt-Park