Kyubyong/dc_tts

How to do the fine-tuning training?

I used the seed model (pretrained on LJ Speech) as the base for a different voice, but the output still sounds like LJ.
I think I'm missing some steps here. @Kyubyong said to adjust the hyperparameters, but I'm not sure exactly what to do beyond the obvious steps.

Here's what I did:
Since the batch size is 32, and the author says he fine-tuned the model with about a minute of voice data, I used 32 voice samples for my second voice.
I edited hyperparams.py to reflect the new data location (roughly the edits sketched below) and train.py to save the model after just one step. I also deleted the mels and mags folders just in case.
Then I ran prepro.py, train.py 1, and train.py 2. I then ran synthesize.py, and the output still sounds like LJ.
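For reference, my hyperparams.py edits looked roughly like this (a sketch, not an exact diff; field names follow the stock file, and the data path below is a placeholder):

class Hyperparams:
    # data
    data = "/path/to/my_second_voice"  # was the LJ Speech path
    # training scheme
    lr = 0.001              # initial learning rate, left as-is at first
    logdir = "logdir/LJ01"  # still points at the pretrained LJ checkpoints,
                            # so train.py resumes (fine-tunes) from them
    B = 32                  # batch size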

Help?

Hi,

- If your goal is to get the best possible results with your second voice, and you have more samples, use them all.
- The batch size is the number of samples used to compute each step. Leave it at 32.
- I think you should train a lot longer. One step only modifies your model parameters a tiny bit. Try training for 100k steps.
- You are right to delete mels and mags. For your fine-tuning you should use only YOUR samples, so check after preprocessing that the files in mels and mags have the same names as your files (except for the extension); see the sketch after this list.
- Then train.py 1 (you should listen to samples right after doing this; it may already sound good if your second voice is not too far from LJ).
- And potentially train.py 2.
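A minimal filename check (my own sketch; adjust the paths, since prepro.py's output location and your wav folder depend on your setup):

import os

wavs = {os.path.splitext(f)[0] for f in os.listdir('my_voice/wavs') if f.endswith('.wav')}
mels = {os.path.splitext(f)[0] for f in os.listdir('mels') if f.endswith('.npy')}
mags = {os.path.splitext(f)[0] for f in os.listdir('mags') if f.endswith('.npy')}

print('mels with no matching wav (should be empty):', sorted(mels - wavs))
print('wavs with no mel (should be empty):', sorted(wavs - mels))
print('mags with no matching wav (should be empty):', sorted(mags - wavs))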

Noé

Hi Noé! Thank you for your reply!
My goal is to have a somewhat recognizable voice with very little data, like Kyubyong's results here: https://github.com/Kyubyong/speaker_adapted_tts. He said "I use only one minute of the speech samples. The following are the synthesized samples after 10 minutes of fine-tuning training." This 10 minutes of fine-tuning training is what I'm trying to figure out how to do.

I've been producing my new voice/transcript samples by hand. Generating enough for 100k steps would take forever and would kind of defeat the purpose of the seed model, as far as I understand it. Or do you mean that I can train with my same single batch for 100k steps?

OK, I see: so your goal is more to get good enough results with very few samples.

So indeed, 100k steps is going to be too much, I guess.

But yes, in that case I think you should do several steps with your same single batch. In fact, in your case one step equals one epoch, because one batch contains all your samples. (The number of epochs is the number of times your system goes over the data during training; with 32 samples and batch size 32, 100 steps means 100 passes over your minute of audio.)

It is hard to tell how many steps you should do, so I would suggest trying several orders of magnitude (e.g. 1, 5, 10, 50, 100, 500, 1k, etc.). You could just run training for a while and keep a backup at those specific steps by replacing this in train.py:

if gs % 1000 == 0:
    sv.saver.save(sess, logdir + '/model_gs_{}'.format(str(gs // 1000).zfill(3) + "k"))

with something like this:

import os, glob           # these can also live at the top of train.py
from shutil import copy

if gs % 10 == 0 or gs in (1, 5):  # also save at steps 1 and 5
    sv.saver.save(sess, logdir + '/model_gs_{}'.format(str(gs).zfill(6)))

    if not os.path.exists(logdir + '/backups/'):
        os.makedirs(logdir + '/backups/')

    # keep a permanent copy at steps 1, 5, 10, 50, 100, 500, 1k, 5k, ...
    # (an exact set-membership test; np.log10 can miss these due to float error)
    if gs in {n * 10 ** e for n in (1, 5) for e in range(6)}:
        for file in glob.glob(logdir + '/model_gs_{}'.format(str(gs).zfill(6)) + '*'):
            print(file)
            copy(file, logdir + '/backups/')
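To listen to one of those backups later, restore it explicitly in synthesize.py instead of the latest checkpoint (the step number below is just an example; use the actual prefix of the files in logdir-1/backups/, without the .index/.data extensions):

saver1.restore(sess, hp.logdir + "-1/backups/model_gs_000050")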

Hi again!
I got a recognizable voice by setting the learning rate to 0.01 and running training for 100 epochs. The audio still has a lot of weird artifacts (for example, my guy added "peng" after the last word in a sentence XD), but I definitely made some progress :D
Also, running train.py 2 resulted in nothing but noise being generated, so I only did train.py 1 to get my results.
Thanks for joining me on this adventure. It's been fun :)

Hi @saya1984, I have a problem in synthesize. The error is:

 File "synthesize.py", line 28, in synthesize
    saver1.restore(sess, tf.train.latest_checkpoint(hp.logdir + "-1"))
  File "./python3.6/site-packages/tensorflow/python/training/saver.py", line 1534, in restore
    raise ValueError("Can't load save_path when it is None.")
ValueError: Can't load save_path when it is None.

How can I fix it?

ValueError: Can't load save_path when it is None.

I also have this error.
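For anyone hitting this: the error means tf.train.latest_checkpoint(hp.logdir + "-1") returned None, i.e. no checkpoint files were found in that directory. Usually train.py 1 has not saved a checkpoint yet, or hp.logdir differs between training and synthesis. A quick diagnostic (my sketch, run from the repo root):

import os
import tensorflow as tf
from hyperparams import Hyperparams as hp

ckpt_dir = hp.logdir + "-1"
print("Looking in:", os.path.abspath(ckpt_dir))
print("Directory exists:", os.path.isdir(ckpt_dir))
print("Latest checkpoint:", tf.train.latest_checkpoint(ckpt_dir))
# If the last line prints None, point hp.logdir at the folder that actually
# contains the model_gs_* checkpoint files (and their 'checkpoint' index file).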