CUDA memory issue
mokby commented
When training for about 10 epochs, an error occurred (batch size is 2):
CUDA out of memory. Tried to allocate 11.47 GiB (GPU 0; 11.99 GiB total capacity; 11.80 GiB already allocated; 0 bytes free; 23.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
When testing, the same error occurred (batch size is 1):
CUDA out of memory. Tried to allocate 11.47 GiB (GPU 0; 11.99 GiB total capacity; 11.80 GiB already allocated; 0 bytes free; 23.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
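For anyone who hits the same message: the error itself points at two things you can try before debugging further, setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` when reserved memory is much larger than allocated memory, and making sure no gradients are kept during test. A minimal sketch is below; the value `128` and the `evaluate` helper are just examples for illustration, not something from this issue.

```python
import os

# Must be set before the first CUDA allocation (ideally before importing torch).
# The split size of 128 MB is an arbitrary example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

def evaluate(model, loader, device="cuda"):
    model.eval()
    with torch.no_grad():  # don't keep activations for backward during test
        for batch in loader:
            batch = batch.to(device)
            _ = model(batch)
    torch.cuda.empty_cache()  # release cached blocks after evaluation
```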
mokby commented
Solved; it was a dataset problem.
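An allocation of 11.47 GiB at batch size 1 usually means a single sample is far larger than the rest, which matches the "dataset problem" resolution. A hypothetical sanity check along these lines can find such samples; it assumes each item is a `(tensor, label)` pair, so adjust the indexing to the actual dataset format.

```python
import torch

def find_oversized_samples(dataset, max_elements=10_000_000):
    # Flag samples whose tensor is far larger than expected.
    for i in range(len(dataset)):
        sample, _ = dataset[i]
        if isinstance(sample, torch.Tensor) and sample.numel() > max_elements:
            print(f"sample {i}: shape {tuple(sample.shape)} "
                  f"({sample.numel()} elements)")
```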