cuda out of memory
Closed this issue · 1 comments
qgh1223 commented
I met a CUDA out-of-memory error when using a 2080 with 11G of memory. Is there any way to solve this problem?
huanglianghua commented
Hi, as a reference, we ran our experiments on 4 TitanX GPUs with 12G of memory each, and training costs around 8~10G of memory per GPU.
If the out-of-memory error still occurs, you can use a smaller `max_instances` in `configs/qg_rcnn_r50_fpn.py`, e.g., reduce it from 8 to 6.
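As a sketch, the change could look like the fragment below. Note this is a hypothetical illustration of an mmdetection-style Python config; the actual key layout and nesting in `configs/qg_rcnn_r50_fpn.py` may differ.

```python
# Hypothetical fragment of configs/qg_rcnn_r50_fpn.py; the real config
# may nest max_instances under a different section.
train_cfg = dict(
    max_instances=6,  # reduced from the default 8 to lower per-GPU memory use
)
```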
Alternatively, you can add a `--fp16` flag to your training script, which will, however, slightly reduce performance by ~1%.
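For example, an invocation might look like the following. The script name `tools/train.py` is a placeholder assumption based on common detection-codebase layouts; use the repository's actual training entry point.

```shell
# Hypothetical invocation; replace tools/train.py with the actual
# training script used by this repository.
python tools/train.py configs/qg_rcnn_r50_fpn.py --fp16
```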