CUDA out of memory
Opened this issue · 1 comment
MyFirstKindom commented
Error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.69 GiB total capacity; 22.50 GiB already allocated; 115.88 MiB free; 22.53 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
Is a GPU with this much memory still insufficient to train on CIFAR-10?
Yanqing0327 commented
Hello, I think the problem is that gradients do not need to be computed when extracting features for clustering; computing them keeps the full activation graph in GPU memory. You could modify this yourself to work around it; I will also update the code within the next week.
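A minimal sketch of what that change could look like, assuming the feature extractor is a standard `nn.Module`; the names `extract_features`, `model`, and `loader` are placeholders, not the repository's actual code:

```python
import torch

@torch.no_grad()  # disable autograd so no graph or activations are retained
def extract_features(model, loader, device="cuda"):
    model.eval()  # freeze batch-norm / dropout behavior during extraction
    feats = []
    for images, _ in loader:
        # Forward pass only; without gradient tracking the per-batch
        # activation memory is freed immediately after each iteration.
        feats.append(model(images.to(device)).cpu())
    return torch.cat(feats)
```

Wrapping the extraction loop in `with torch.no_grad():` instead of using the decorator would have the same effect.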