train_tacotron.py giving error.
anjanakethineni opened this issue · 3 comments
When I run train_tacotron.py, it fails with the following error:
Using device: cpu
Initialising Tacotron Model...
Trainable Parameters: 11.088M
Restoring from latest checkpoint...
Loading latest weights: /content/drive/My Drive/WaveRNN/checkpoints/ljspeech_lsa_smooth_attention.tacotron/latest_weights.pyt
Loading latest optimizer state: /content/drive/My Drive/WaveRNN/checkpoints/ljspeech_lsa_smooth_attention.tacotron/latest_optim.pyt
<utils.paths.Paths object at 0x7fbce6e350f0>
+----------------+------------+---------------+------------------+
| Steps with r=7 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
|   10k Steps    |     32     |     0.001     |        7         |
+----------------+------------+---------------+------------------+
error:
Traceback (most recent call last):
  File "/content/drive/My Drive/WaveRNN/train_tacotron.py", line 202, in <module>
    main()
  File "/content/drive/My Drive/WaveRNN/train_tacotron.py", line 98, in main
    tts_train_loop(paths, model, optimizer, train_set, lr, training_steps, attn_example)
  File "/content/drive/My Drive/WaveRNN/train_tacotron.py", line 126, in tts_train_loop
    for i, (x, m, ids, _) in enumerate(train_set, 1):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 764, in __init__
    self._try_put_index()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 994, in _try_put_index
    index = self._next_index()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 357, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py", line 208, in __iter__
    for idx in self.sampler:
  File "/content/drive/My Drive/WaveRNN/utils/dataset.py", line 212, in __iter__
    binned_idx = np.stack(bins).reshape(-1)
  File "<__array_function__ internals>", line 6, in stack
  File "/usr/local/lib/python3.6/dist-packages/numpy/core/shape_base.py", line 422, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack
This is probably caused by a bad path to your dataset: the sampler ends up with an empty file list and tries to stack nothing. Double-check your dataset path and options.
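For reference, the error itself is just NumPy refusing to stack an empty list. If the dataset path is wrong (or preprocessing produced no entries), the sampler's list of bins is empty and the call at dataset.py line 212 fails exactly like this:

```python
import numpy as np

# An empty dataset (e.g. a bad path yielding zero metadata entries)
# leaves the sampler with an empty list of bins.
bins = []

try:
    np.stack(bins).reshape(-1)
except ValueError as e:
    print(e)  # need at least one array to stack
```

So the traceback points at NumPy, but the real problem is upstream: zero items made it into the dataset.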
I have this same issue, did you manage to fix it?
@Covac I think my path is good; python preprocess.py generated the data without issue.
Turns out I was getting this error because my dataset was too small. If you have fewer than some minimum number of input files (I think it was 96 or 128 or so), the number of usable bins or batches ends up being 0, which triggers this error. Just make sure you have enough audio files.
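That matches the traceback. I haven't checked WaveRNN's exact sampler code, but a binned-length sampler along these lines (names and thresholds are illustrative, not WaveRNN's actual API) shows the failure mode: each bin is truncated to a whole number of batches, so a tiny dataset produces zero bins and np.stack(bins) blows up.

```python
import numpy as np

def make_bins(num_samples, batch_size, bin_size):
    """Hypothetical sketch of a binned sampler: indices are grouped
    into bins of bin_size, and each bin is truncated to a whole
    multiple of batch_size. Bins with no full batch are dropped."""
    idx = np.arange(num_samples)
    bins = []
    for i in range(0, num_samples, bin_size):
        chunk = idx[i:i + bin_size]
        usable = (len(chunk) // batch_size) * batch_size
        if usable > 0:
            bins.append(chunk[:usable])
    return bins

# With only 20 samples and batch_size=32, no chunk fills a single
# batch, so bins is empty and np.stack(bins) would raise the error.
print(len(make_bins(num_samples=20, batch_size=32, bin_size=96)))   # 0
print(len(make_bins(num_samples=200, batch_size=32, bin_size=96)))  # 2
```

Lowering batch_size in hparams would also work around it, but adding more audio is the better fix for training quality anyway.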