Training throws an error — is it a GPU problem?
Closed this issue · 4 comments
This is my GPU setup:
```
-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\multiprocessing\spawn.py", line 59, in _wrap
    fn(i, *args)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\train.py", line 108, in run
    train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler,
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\train.py", line 146, in train_and_evaluate
    (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(c, f0, spec, g=g, mel=mel)
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\parallel\distributed.py", line 886, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\models.py", line 330, in forward
    z_ptemp, m_p, logs_p, _ = self.enc_p(c, c_lengths, f0=f0_to_coarse(f0))
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\models.py", line 119, in forward
    x = self.enc(x * x_mask, x_mask)
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\attentions.py", line 39, in forward
    y = self.attn_layers[i](x, x, attn_mask)
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\attentions.py", line 143, in forward
    x, self.attn = self.attention(q, k, v, mask=attn_mask)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\attentions.py", line 175, in attention
    relative_weights = self._absolute_position_to_relative_position(p_attn)
  File "D:\AI cover\so-vits-svc-main\so-vits-svc-main\attentions.py", line 241, in _absolute_position_to_relative_position
    x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\nn\functional.py", line 4174, in _pad
    return _VF.constant_pad_nd(input, pad, value)
RuntimeError: CUDA out of memory. Tried to allocate 28.00 MiB (GPU 0; 2.00 GiB total capacity; 1.70 GiB already allocated; 0 bytes free; 1.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
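For what it's worth, the error message itself suggests trying the `max_split_size_mb` allocator option to reduce fragmentation. A minimal sketch of how to set it (the `64` MiB value is an assumption, not a project recommendation, and on a card this small it is unlikely to help much); it only takes effect if set before PyTorch initializes CUDA:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read by PyTorch's caching allocator at
# CUDA initialization time, so set it before importing/using torch.cuda.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # verify the setting is in place
```

Equivalently, export the variable in the shell before launching `train.py`.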
2 GB of VRAM is too small.
How much would be enough?
At least 8 GB, I'd say. With 2 GB of VRAM it probably won't fit even with the batch size turned down to 2.
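If you still want to try, the batch size lives in the training config JSON (commonly `configs/config.json` in so-vits-svc). A hedged sketch of lowering it, assuming the usual `{"train": {"batch_size": ...}}` layout — check your own config's structure before running this:

```python
import json

def lower_batch_size(config_path, new_size=2):
    """Rewrite train.batch_size in a so-vits-svc style config file.

    Assumes the common layout {"train": {"batch_size": ...}}; raises
    KeyError if the config is shaped differently.
    """
    with open(config_path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    cfg["train"]["batch_size"] = new_size
    with open(config_path, "w", encoding="utf-8") as f:
        json.dump(cfg, f, indent=2, ensure_ascii=False)
    return cfg["train"]["batch_size"]
```

Even then, the model weights and activations alone can exceed 2 GB, so expect the OOM to persist on this card.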
It seems you can rent a GPU instead.