Has anyone compared this training framework to TRL?
StarrySeas1 commented
TRL's PPO implementation is simpler than this one and uses less memory, since this framework adds a separate value (critic) network. I don't know which framework is more stable and effective.
refrain-wbh commented
While TRL does eliminate the separate value network, it may be relatively harder to train, because the policy and the value function share parameters. On the other hand, the TRL library's code bundles many optimizations, whereas our code applies no extra optimization tricks, making it easier to understand and modify.
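For anyone comparing the two designs, here is a minimal PyTorch sketch of the structural difference being discussed. The toy trunk, dimensions, and class names (`SharedActorCritic`, `SeparateActorCritic`) are illustrative assumptions, not the actual code of TRL or this repo:

```python
import torch
import torch.nn as nn

# Toy dimensions, just for illustration.
HIDDEN, VOCAB = 64, 100

def make_trunk() -> nn.Module:
    # Stand-in for a transformer backbone.
    return nn.Sequential(nn.Embedding(VOCAB, HIDDEN),
                         nn.Linear(HIDDEN, HIDDEN), nn.Tanh())

class SharedActorCritic(nn.Module):
    """TRL-style: one trunk, two heads; policy and value share parameters."""
    def __init__(self):
        super().__init__()
        self.trunk = make_trunk()
        self.lm_head = nn.Linear(HIDDEN, VOCAB)  # policy logits
        self.value_head = nn.Linear(HIDDEN, 1)   # scalar value per token

    def forward(self, ids):
        h = self.trunk(ids)  # one forward pass feeds both heads
        return self.lm_head(h), self.value_head(h).squeeze(-1)

class SeparateActorCritic(nn.Module):
    """This framework's style (as described above): an independent critic."""
    def __init__(self):
        super().__init__()
        self.actor = nn.Sequential(make_trunk(), nn.Linear(HIDDEN, VOCAB))
        # A full second network for the value -> extra memory, but the value
        # loss cannot interfere with the policy's representations.
        self.critic = nn.Sequential(make_trunk(), nn.Linear(HIDDEN, 1))

    def forward(self, ids):
        return self.actor(ids), self.critic(ids).squeeze(-1)

ids = torch.randint(0, VOCAB, (2, 8))  # (batch, seq)
logits, values = SharedActorCritic()(ids)
print(logits.shape, values.shape)      # torch.Size([2, 8, 100]) torch.Size([2, 8])
```

In the shared design, gradients from the value loss flow into the same trunk the policy uses, which saves memory but is one reason training can be touchier; the separate-critic design trades memory for that isolation.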