Generated waves were empty
frozen-finger opened this issue · 8 comments
I have trained this for over 23k steps, but when I run synthesis.py the result sounds empty, even though the generated mag looks normal. Can anyone tell me how to solve this problem?
Sorry if this problem seems stupid, but when I changed is_training to True the output was no longer just silence, although I still cannot understand what it says. So, is this related to batch normalization? @Kyubyong
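For what it's worth, the is_training observation is consistent with batch normalization: in training mode it normalizes with the current batch's own statistics, while at inference it uses moving averages accumulated during training. If those averages haven't converged yet (e.g. after too few steps), inference-mode activations come out badly scaled. A minimal numpy sketch of that behaviour, assuming standard batch norm; the moving-average values below are hypothetical:

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, is_training, eps=1e-5):
    """Minimal batch-norm forward pass: batch statistics in training mode,
    stored moving averages at inference (this is what the is_training flag
    toggles inside TensorFlow's batch normalization)."""
    if is_training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(64, 8))

# Training mode: the batch is normalized with its own statistics,
# so the output is roughly zero-mean, unit-variance.
y_train = batch_norm(x, None, None, is_training=True)

# Inference mode with moving averages that have not converged
# (hypothetical zero-mean/unit-variance values): the output keeps the
# raw offset and scale, which downstream can collapse to noise or silence.
y_infer = batch_norm(x, np.zeros(8), np.ones(8), is_training=False)
```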
You're going to need to train for at least 150,000 steps I'd imagine. See the pretrained models.
Thank you for your advice. Can I ask how many steps you trained for, and how the results sounded?
I ran into this problem too. But even with is_training set to True, the audio synthesized in synthesize mode is still far worse than in training mode.
@frozen-finger How did you solve this problem? Can you please explain?
The difference in quality between audio generated during training and at inference is because your model hasn't learned "attention". Make sure to look at the attention plots like the one here. If your model is learning attention, you should start to see a more or less diagonal line. This is also why @nevercast suggested you train for many more steps. Most of my training sessions start producing decent attention plots around 60k steps.
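One cheap way to check this beyond eyeballing the plots is to score how diagonal the alignment matrix is. A sketch, assuming the alignment is a (decoder_steps x encoder_steps) array like the one the repo renders as an image; the scoring function is made up for illustration:

```python
import numpy as np

def diagonality(attn):
    """Crude score in (0, 1]: how close each decoder step's attention peak
    sits to the main diagonal of a (decoder x encoder) alignment matrix."""
    T, N = attn.shape
    peaks = attn.argmax(axis=1)        # attended encoder position per step
    ideal = np.linspace(0, N - 1, T)   # a perfectly diagonal alignment
    return float(1.0 - np.mean(np.abs(peaks - ideal)) / N)

# A sharp diagonal alignment scores near 1; uniform (unlearned) attention
# scores noticeably lower.
good = np.eye(80)
flat = np.full((80, 80), 1.0 / 80)
```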
If your dataset has empty spaces at the start or end of the audio files, trimming those would greatly help with this problem.
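Trimming is usually done with librosa.effects.trim; here is a dependency-free numpy sketch of the same idea, using a plain amplitude threshold instead of librosa's dB threshold:

```python
import numpy as np

def trim_silence(wav, threshold=0.01):
    """Trim leading and trailing samples whose absolute amplitude falls
    below `threshold` (a simple stand-in for librosa.effects.trim)."""
    loud = np.where(np.abs(wav) > threshold)[0]
    if len(loud) == 0:
        return wav[:0]  # the whole clip is silence
    return wav[loud[0]:loud[-1] + 1]

# Example: half a second of silence, one second of a 440 Hz tone,
# then half a second of silence again.
sr = 22050
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
wav = np.concatenate([np.zeros(sr // 2), tone, np.zeros(sr // 2)])
trimmed = trim_silence(wav)
```

After trimming, only (roughly) the one-second tone remains, so the model never has to learn to attend through long stretches of silence.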
@TheNarrator Thanks for the response.
@nevercast @frozen-finger @candlewill @Kyubyong
The attention plots look diagonal after 50k steps, so it seems the model has learned attention, though maybe more steps are needed.
There seems to be a problem with the predicted mel (mel_hat) in synthesis.py: when I feed the ground-truth mel extracted from the wav file into mel_hat instead of the model's prediction, the result is perfect and sounds clean.
So I think the mel_hat prediction is going wrong. Will it improve after more steps?
I ran into the same problem: mel_gt and mag_gt are correct, but the mel_hat and mag_hat predictions go wrong and the synthesized audio is empty. Have you fixed it?