AI4Finance-Foundation/FinRL-Tutorials

NeurIPS2018 no speedup with increasing batch size

Opened this issue · 2 comments

I tested the NeurIPS2018 demo with stable-baselines3, using the SAC agent, and trained on GPU. When I increased the batch size from 128 to 512, I saw no change in GPU memory usage or utilization rate.

The versions I used:
stable-baselines3==1.5.0
torch==1.10.0

Training time also does not change when I change the batch size. What could be the problem?
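A minimal timing sketch of what I mean (not the exact NeurIPS2018 notebook: Pendulum-v1 is used here as a stand-in environment, and stable-baselines3==1.5.0 / torch==1.10.0 are assumed as above):

```python
import time

import gym
from stable_baselines3 import SAC

# Train for the same number of timesteps with two batch sizes and compare
# wall-clock time and GPU load (e.g. via nvidia-smi in another terminal).
for batch_size in (128, 512):
    env = gym.make("Pendulum-v1")
    model = SAC("MlpPolicy", env, batch_size=batch_size, device="cuda", verbose=0)
    start = time.time()
    model.learn(total_timesteps=10_000)
    print(f"batch_size={batch_size}: {time.time() - start:.1f}s")
```

Both runs finish in roughly the same time on my machine.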

It seems that SB3 is not well optimized for GPU throughput. With a single environment and, by default, one gradient step per environment step, the training loop is largely CPU-bound, so a larger batch only changes the size of an already small GPU update and barely affects wall-clock time. If you are dealing with compute-intensive cases, ElegantRL may be a good choice.
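One rough way to check this, reusing the same stand-in Pendulum-v1 setup as above: fill the replay buffer once, then time only the gradient updates via SAC's public `train()` method. If the isolated updates do scale with batch size while total training time does not, the bottleneck is the environment rollout loop rather than the GPU.

```python
import time

import gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")
model = SAC("MlpPolicy", env, device="cuda", verbose=0)
model.learn(total_timesteps=2_000)  # fill the replay buffer first

for batch_size in (128, 512):
    start = time.time()
    # GPU-only work: no environment stepping happens inside train()
    model.train(gradient_steps=1_000, batch_size=batch_size)
    print(f"batch_size={batch_size}: {time.time() - start:.2f}s for 1000 updates")
```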