Data-Science-kosta/Speech-Emotion-Classification-with-PyTorch

RuntimeError: CUDA out of memory

ucargokhan opened this issue · 1 comment

When training the model, the following error occurs:

```
Selected device is cuda

RuntimeError Traceback (most recent call last)
<ipython-input-...> in <module>
4 device = 'cuda' if torch.cuda.is_available() else 'cpu'
5 print('Selected device is {}'.format(device))
----> 6 model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
7 print('Number of trainable params: ',sum(p.numel() for p in model.parameters()))
8 OPTIMIZER = torch.optim.SGD(model.parameters(),lr=0.01, weight_decay=1e-3, momentum=0.8)

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in to(self, *args, **kwargs)
671 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
672
--> 673 return self._apply(convert)
674
675 def register_backward_hook(

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
385 def _apply(self, fn):
386 for module in self.children():
--> 387 module._apply(fn)
388
389 def compute_should_use_set_data(tensor, tensor_applied):

~\anaconda3\lib\site-packages\torch\nn\modules\rnn.py in _apply(self, fn)
177
178 def _apply(self, fn):
--> 179 ret = super(RNNBase, self)._apply(fn)
180
181 # Resets _flat_weights

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn)
407 # with torch.no_grad():
408 with torch.no_grad():
--> 409 param_applied = fn(param)
410 should_use_set_data = compute_should_use_set_data(param, param_applied)
411 if should_use_set_data:

~\anaconda3\lib\site-packages\torch\nn\modules\module.py in convert(t)
669 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
670 non_blocking, memory_format=convert_to_format)
--> 671 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
672
673 return self._apply(convert)

RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 2.00 GiB total capacity; 1.08 GiB already allocated; 1.44 MiB free; 1.11 GiB reserved in total by PyTorch)
```

Decreasing the batch size doesn't help.
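This is expected: the traceback shows the allocation failing inside `HybridModel(num_emotions=len(EMOTIONS)).to(device)`, i.e. while the model's parameters are being copied to the GPU and before any batch is built, so the batch size never comes into play. With 1.08 GiB of the 2 GiB card already allocated and only 1.44 MiB free, even the parameters alone do not fit. As a rough check, a sketch like the one below estimates the parameter footprint on the CPU first (`HybridModel` and `EMOTIONS` are the notebook's own objects; this snippet is illustrative, not part of the original code):

```python
# Rough sketch: estimate the parameter footprint before moving the model to the GPU.
# HybridModel and EMOTIONS come from this repository's notebook.
model = HybridModel(num_emotions=len(EMOTIONS))  # build on the CPU first
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f'Parameter memory: {param_bytes / 1024**2:.1f} MiB')
# Training needs roughly the same amount again for gradients, plus optimizer
# state and activation buffers, so several times this figure must be free on the GPU.
```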

Your GPU has only 2 GB of memory; try running it on Kaggle's GPU instead.
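If you still want to try the local 2 GB card, it helps to see how much memory is actually free and to release PyTorch's cached blocks before moving the model. Below is a minimal sketch of such a device selection with a CPU fallback; `torch.cuda.mem_get_info` needs a reasonably recent PyTorch, the 500 MiB headroom is an arbitrary guess, and `HybridModel`/`EMOTIONS` are again the notebook's objects:

```python
import torch

# Sketch: pick the GPU only if enough memory is actually free, else fall back to the CPU.
device = 'cpu'
if torch.cuda.is_available():
    torch.cuda.empty_cache()                 # hand cached, unused blocks back to the driver
    free, total = torch.cuda.mem_get_info()  # free/total bytes on the current GPU
    print(f'GPU memory free: {free / 1024**2:.1f} MiB of {total / 1024**3:.2f} GiB')
    if free > 500 * 1024**2:                 # arbitrary headroom threshold
        device = 'cuda'

print('Selected device is {}'.format(device))
model = HybridModel(num_emotions=len(EMOTIONS)).to(device)
```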