NVIDIA/mellotron

number of epochs issue

yuzuda283 opened this issue · 1 comment

The default number of epochs in hparams.py is 50000. Does training really need that many epochs? I can't use the distributed run or the fp16 run, and when I tried nn.DataParallel() for multi-GPU training, I ran into bugs. So if I train the model on 10000 speech samples with a single P100, 50000 epochs could take about 200 days!
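For context, the multi-GPU attempt described above would typically follow the pattern sketched below. Since the failing code isn't shown, `SimpleModel` is a hypothetical stand-in for the Mellotron model; a common source of bugs with nn.DataParallel is reaching custom attributes or methods through the wrapper instead of via `.module`.

```python
# Minimal sketch of the nn.DataParallel pattern (SimpleModel is a
# hypothetical stand-in, not the actual Mellotron model).
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(80, 80)

    def forward(self, x):
        return self.linear(x)

model = SimpleModel()
if torch.cuda.is_available():
    model = model.cuda()
    if torch.cuda.device_count() > 1:
        # Replicates the model across GPUs and splits each batch
        # along dimension 0, gathering outputs on the default device.
        model = nn.DataParallel(model)

x = torch.randn(16, 80)
if torch.cuda.is_available():
    x = x.cuda()
out = model(x)

# Note: once wrapped, the original module's attributes must be accessed
# as model.module.<name>, e.g. model.module.linear -- forgetting this
# is a frequent cause of AttributeError bugs with DataParallel.
```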

How many epochs did you use?