CUDA out of memory: how do I specify a GPU device?
yuheyuan opened this issue · 2 comments
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 23.70 GiB total capacity; 1.33 GiB already allocated; 5.00 MiB free; 1.40 GiB reserved in total by PyTorch)
When I run DAFormer, it's OK. But when I run HRDA, it raises CUDA out of memory.
I want to switch from GPU 0 to GPU 1, but I don't know how to change it.
Usually, I specify the GPU in code:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = '1'
But it doesn't work in this project.
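One possible explanation, offered as a guess: `CUDA_VISIBLE_DEVICES` only takes effect if it is set before CUDA is initialized, i.e. before the first `import torch` (or at least before the first CUDA call). If the training script imports torch before your assignment runs, the variable is silently ignored. A minimal sketch:

```python
import os

# Must run before `import torch` (or anything that initializes CUDA),
# otherwise the mask has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch  # must come after the line above
# Under this mask, physical GPU 1 is re-indexed and appears to
# PyTorch as cuda:0 (device index 0).
print(os.environ["CUDA_VISIBLE_DEVICES"])
```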
I found this code in your gtaHR2csHR_hrda.py:
n_gpus = 1
gpu_model = 'NVIDIATITANRTX'
Should gpu_model be changed? My GPUs are two RTX 3090s.
So I would like to know how to change the GPU in this code; the default is GPU 0. Or how to change the configs so the code runs successfully.
Maybe GPU 1 is actually being used: if I specify GPU 1, PyTorch re-indexes it as GPU 0, and then this problem occurs.
So I want to know whether a 3090 can run this code, or how to change the configs to make it run.
The flags n_gpus and gpu_model are for internal purposes only and have no functionality in this repository. I forgot to remove them.
However, you can specify the GPU by setting cfg['gpu_ids'].
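A minimal sketch of the maintainer's suggestion, assuming HRDA's training entry point builds an mmcv-style config object (stood in here by a plain dict; the config path and variable names are illustrative):

```python
# In the real repo, cfg would be an mmcv Config loaded from a file such
# as gtaHR2csHR_hrda.py; a dict stands in for it in this sketch.
cfg = {}

# Setting gpu_ids before the runner is created selects the device:
# [1] means train on physical GPU 1 instead of the default GPU 0.
cfg['gpu_ids'] = [1]
print(cfg['gpu_ids'])
```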