The training step of CQL-SAC.
DooyoungH opened this issue · 1 comment
DooyoungH commented
I have been studying your CQL code.
However, when I run train.py for CQL-SAC, I think Line 68 is not appropriate for offline RL.
Line 68 : buffer.add(state, action, reward, next_state, done)
Doesn't this line make it an online off-policy setup, since transitions collected by the agent interacting with the environment are added to the buffer? In offline RL, the agent should learn only from a fixed, pre-collected dataset.
Thank you for your hard work.
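For context, the surrounding training loop looks roughly like this (a minimal sketch; names such as env, agent.get_action, and agent.learn are my assumptions, not the repo's exact code):

```python
# Rough sketch of the online collection loop around Line 68
# (env / agent names are assumed for illustration):
state = env.reset()
for step in range(max_steps):
    action = agent.get_action(state)                     # policy acts in the env
    next_state, reward, done, _ = env.step(action)       # fresh online transition
    buffer.add(state, action, reward, next_state, done)  # Line 68
    agent.learn(buffer.sample(batch_size))               # off-policy update
    state = env.reset() if done else next_state
```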
BY571 commented
Yes, indeed, this is only for the online RL setting. For an SL-style or batch (offline) RL setting you would have to adapt that, e.g. by filling the buffer from a pre-collected dataset instead of from environment interaction.
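A minimal sketch of that adaptation, assuming a pre-collected dataset on disk (the file name, array keys, and the agent/training names here are illustrative assumptions, not code from this repo):

```python
import random
from collections import deque

import numpy as np

# Stand-in replay buffer; the repo's ReplayBuffer would serve the same role.
buffer = deque(maxlen=1_000_000)

# Fill the buffer ONCE from a static, pre-collected dataset
# (file name and keys are assumptions for illustration):
data = np.load("offline_transitions.npz")
for s, a, r, s2, d in zip(data["states"], data["actions"], data["rewards"],
                          data["next_states"], data["dones"]):
    buffer.append((s, a, r, s2, d))  # replaces Line 68: no env.step() anywhere

# Training then samples only from the fixed buffer:
batch_size, num_gradient_steps = 256, 100_000
for _ in range(num_gradient_steps):
    batch = random.sample(buffer, batch_size)
    agent.learn(batch)  # `agent` is the CQL-SAC agent; learn() is assumed
```

The key change is that the environment is never stepped during training, so the policy is evaluated and improved purely from the fixed dataset, which is what CQL's conservative penalty is designed for.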