Increasing GPU memory usage during online finetuning
guosyjlu opened this issue
guosyjlu commented
Hi, thanks for your great work on this implementation. When I use this codebase, I find that GPU memory usage increases from ~4000 MiB (offline pretraining) to ~11000 MiB (online finetuning). Do you have any idea what might cause this?
For offline pretraining:
python experiments.py --env hopper --dataset medium-replay
For online finetuning:
python experiments.py --env hopper --dataset medium-replay --online_training
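The figures above are as reported by `nvidia-smi`. For anyone wanting to reproduce the measurement, here is a minimal sketch (hypothetical helper names, assumes `nvidia-smi` is on PATH) that polls used memory over time so the jump between the two phases is easy to see:

```python
import re
import subprocess
import time


def parse_used_mib(line: str) -> int:
    """Extract the used-memory figure in MiB from a line like '11000 MiB'."""
    match = re.search(r"(\d+)\s*MiB", line)
    if match is None:
        raise ValueError(f"no MiB figure found in: {line!r}")
    return int(match.group(1))


def poll_gpu_memory(interval_s: float = 5.0, samples: int = 12) -> list:
    """Poll nvidia-smi and return one used-memory reading (MiB) per sample."""
    readings = []
    for _ in range(samples):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
        readings.append(parse_used_mib(out.splitlines()[0]))
        time.sleep(interval_s)
    return readings
```

Running `poll_gpu_memory()` in a separate terminal while each command executes gives a per-interval trace of used memory for the first GPU.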