z-fabian/HUMUS-Net

About GPU memory

Closed this issue · 3 comments

hi! I tried to run your code on two RTX 3090 GPUs (24 GB each), but it didn't work; the error message showed "CUDA: out of memory".

Hi,

Can you please let me know what commands you used to run the code? Make sure that the use_checkpointing flag is set to reduce GPU memory usage. You can also see all the hyperparameters and arguments printed when the code starts running. Otherwise, GPU memory shouldn't be an issue for the default models; we also used 24 GB GPUs for our experiments.
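For reference, the flag is passed on the command line when launching training. The sketch below is only illustrative: the script name, data path, and GPU argument are assumptions (check the repository's README for the actual entry point and argument names); only use_checkpointing is taken from the discussion above.

```shell
# Hypothetical invocation -- script and paths are placeholders,
# see the repo README for the real training entry point.
python train.py \
    --data_path /path/to/fastmri/data \
    --gpus 2 \
    --use_checkpointing   # activation checkpointing: trades extra compute for lower GPU memory
```

With activation (gradient) checkpointing enabled, intermediate activations are recomputed during the backward pass instead of being stored, which is what keeps the default models within a 24 GB memory budget.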

Hi @Aristot1e, did you manage to run the code? Please let me know if there is still some issue.

I have solved this problem and am now running the code successfully. Thank you so much.