JingwenWang95/go-surf

CUDA out of memory when running reconstruction

botaoye opened this issue · 1 comment

Hi, thanks for your great work. I am running the code on an RTX 2080 Ti (11 GB memory). During reconstruction, the constructed unit_grid already exceeds the GPU memory limit. Do you have any suggestions for running it on an RTX 2080 Ti?

```python
unit_grid = torch.stack(torch.meshgrid(torch.linspace(-1, 1, nx),
```
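For a rough sense of scale (my own back-of-the-envelope estimate, using an illustrative resolution rather than the repo's actual value), the dense coordinate grid alone grows cubically with the reconstruction resolution:

```python
# Rough estimate of the dense coordinate grid's footprint (assumed float32).
# nx, ny, nz here are illustrative values, not the config defaults.
nx = ny = nz = 512
bytes_per_float = 4
grid_bytes = 3 * nx * ny * nz * bytes_per_float  # 3 coordinates per voxel
print(f"unit_grid alone: {grid_bytes / 1024**3:.2f} GiB")  # ~1.5 GiB at 512^3
```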

Hi @botaoye, that's weird, as we also ran experiments on a 2080 Ti, so it shouldn't happen... Can you try reducing reconstruct_upsample in the config file to see if you can at least get something? You should be able to find it in base.yaml. Unfortunately I'm away for a few days; I will have a look when I'm back on Friday.
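In case it helps in the meantime: a common workaround for this kind of OOM is to keep the dense grid on CPU and query the network in chunks, so only small batches ever sit on the GPU. The sketch below is just an illustration under assumptions, not go-surf's actual reconstruction path; `model` is a hypothetical callable returning one value per 3D point, and `indexing="ij"` requires torch >= 1.10.

```python
import torch

def query_grid_in_chunks(model, nx, ny, nz, chunk=2**20, device="cuda"):
    """Build the [-1, 1] coordinate grid on CPU and evaluate it in chunks,
    so peak GPU memory stays bounded regardless of reconstruction resolution."""
    xs = torch.linspace(-1, 1, nx)
    ys = torch.linspace(-1, 1, ny)
    zs = torch.linspace(-1, 1, nz)
    # The full grid lives on CPU; only one chunk at a time is moved to the GPU.
    grid = torch.stack(torch.meshgrid(xs, ys, zs, indexing="ij"), dim=-1)
    pts = grid.reshape(-1, 3)
    values = []
    with torch.no_grad():
        for i in range(0, pts.shape[0], chunk):
            batch = pts[i:i + chunk].to(device)
            values.append(model(batch).cpu())
    return torch.cat(values).reshape(nx, ny, nz)
```

Reducing reconstruct_upsample is still the simplest fix, since it shrinks nx/ny/nz directly; the chunked query only caps peak memory during evaluation.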