seungeunrho/minimalRL

Query about LSTM

npitsillos opened this issue · 0 comments

Hello, nice and clear implementation! I want to ask something about the LSTM usage. While gathering experience, the input to the LSTM has dimension [1, 1, 64] — does that represent 1 timestep of 1 episode along with the 64 FC features?
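To make sure I'm reading the shapes right, here is a minimal sketch of my understanding, assuming PyTorch's default `nn.LSTM` layout of (seq_len, batch, features) — the hidden size of 32 is just illustrative, not necessarily the repo's value:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=64, hidden_size=32)  # 64 FC features in; hidden size illustrative
# hidden/cell states have shape (num_layers, batch, hidden_size)
h = (torch.zeros(1, 1, 32), torch.zeros(1, 1, 32))

# one rollout step: 1 timestep, 1 episode (batch), 64 FC features
x = torch.randn(1, 1, 64)
out, h = lstm(x, h)
print(out.shape)  # torch.Size([1, 1, 32])
```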

Also, when training on a batch, you sample a sequence of this shape, e.g. [20, 1, 64] — does that correspond to 20 timesteps?
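In other words, if I understand correctly, the whole sequence is fed through in one call and the LSTM returns one output per timestep (again just a sketch with illustrative sizes):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=64, hidden_size=32)
h = (torch.zeros(1, 1, 32), torch.zeros(1, 1, 32))

# training pass: 20 timesteps of a single episode, 64 features each
seq = torch.randn(20, 1, 64)
out, h_out = lstm(seq, h)
print(out.shape)  # torch.Size([20, 1, 32]) — one output per timestep
```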

Finally, shouldn't the hidden state have the same dimensions except the last — i.e. match the timestep dimension? What is the best way to handle the hidden state when using an LSTM, or is it just an implementation choice?
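What confuses me is that, as far as I can tell from PyTorch's `nn.LSTM`, the hidden state is (num_layers, batch, hidden_size) and does not grow with the sequence length — a quick check of my assumption:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=64, hidden_size=32, num_layers=1)
h0 = (torch.zeros(1, 1, 32), torch.zeros(1, 1, 32))

# same initial hidden state, two different sequence lengths
_, (h1, c1) = lstm(torch.randn(1, 1, 64), h0)    # 1 timestep
_, (h20, c20) = lstm(torch.randn(20, 1, 64), h0)  # 20 timesteps

# the returned hidden state has the same shape in both cases:
# only the *last* timestep's state is kept, not one per timestep
print(h1.shape, h20.shape)  # torch.Size([1, 1, 32]) torch.Size([1, 1, 32])
```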