davabase/whisper_real_time

torch.cuda.OutOfMemoryError

yangxiongj opened this issue · 1 comment

```
Traceback (most recent call last):
  File "transcribe_demo.py", line 151, in <module>
    main()
  File "transcribe_demo.py", line 69, in main
    audio_model = whisper.load_model(model)
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\whisper\__init__.py", line 122, in load_model
    return model.to(device)
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\torch\nn\modules\module.py", line 989, in to
    return self._apply(convert)
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
    param_applied = fn(param)
  File "D:\anaconda3\envs\whisperTime\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 8.00 GiB total capacity; 6.50 GiB already allocated; 0 bytes free; 6.83 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
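As the error message itself suggests, one mitigation is setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce allocator fragmentation. A minimal sketch of doing this from inside the script (the value `128` is just an illustrative choice, not a recommendation from this thread; the variable must be set before CUDA is initialized, so put it at the very top of `transcribe_demo.py`):

```python
import os

# Must run before the first CUDA allocation, i.e. before torch touches the GPU.
# 128 MiB is an example split size; tune for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ... then import torch / whisper and load the model as usual.
```

Setting the variable in the shell before launching Python (`set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` on Windows) works the same way.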

Use medium.pt: the medium model needs substantially less GPU memory than the model that failed to load here, so it fits within the 8 GiB card.
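The suggestion above can be made automatic by falling back to progressively smaller Whisper model names when loading runs out of GPU memory. A sketch, assuming the standard Whisper size names; the helper name `load_with_fallback` and the fallback order are hypothetical, and the loader is passed in as a parameter (you would pass `whisper.load_model`). Note that `torch.cuda.OutOfMemoryError` is a subclass of `RuntimeError`, so catching `RuntimeError` covers it:

```python
import gc


def load_with_fallback(load_model, preferred=("medium", "small", "base")):
    """Try each model name in order, returning the first that loads.

    load_model: a callable like whisper.load_model(name).
    Raises the last error if every size fails.
    """
    last_err = None
    for name in preferred:
        try:
            return load_model(name)
        except RuntimeError as err:  # covers torch.cuda.OutOfMemoryError
            last_err = err
            gc.collect()  # drop the partially-moved model before retrying
    raise last_err
```

In `transcribe_demo.py` this would replace the direct call, e.g. `audio_model = load_with_fallback(whisper.load_model)`.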