Fine-tuning with frozen layers and a smaller learning rate?
chigkim opened this issue · 0 comments
chigkim commented
Is there a recipe for fine-tuning that freezes the beginning layers and only trains the last few layers with a smaller learning rate?
If I just use train_yourtts.py and set RESTORE_PATH, doesn't it train all the layers?
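To be clear about what I mean, here is a minimal, generic PyTorch sketch of the idea (not taken from this repo's recipes): freeze the early layers by setting requires_grad=False and build the optimizer only from the remaining parameters with a smaller learning rate. The names ToyModel, early_layers, and last_layers are hypothetical placeholders, not Coqui TTS modules.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained model; the real model's
# submodule names would differ.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.early_layers = nn.Sequential(
            nn.Linear(80, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.last_layers = nn.Sequential(
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 80),
        )

    def forward(self, x):
        return self.last_layers(self.early_layers(x))

model = ToyModel()
# model.load_state_dict(torch.load("RESTORE_PATH"))  # restore pretrained weights

# Freeze the beginning layers so they receive no gradient updates.
for param in model.early_layers.parameters():
    param.requires_grad = False

# Optimize only the unfrozen parameters, with a smaller learning rate
# than was used for pretraining.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-5,
)
```

Is something like this possible through the existing training script/config, or would I need to modify the trainer myself?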