Ghadjeres/DeepBach

Training epochs

Closed this issue · 2 comments

I have implemented some code that updates the best model so far whenever the validation loss at the end of an epoch is lower than any previous epoch. Even so, I was wondering how many epochs you trained the original DeepBach model for, so that we can make sure we replicate your results.

Thank you!
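For reference, the best-model tracking described above can be sketched roughly as follows. This is a hypothetical, framework-agnostic sketch, not the actual code from this implementation: `train_one_epoch` and `validate` are stand-ins for the real training and validation steps, and in practice the best state would be saved with something like `torch.save`.

```python
# Hypothetical sketch of "keep the best model so far" checkpointing,
# selecting the epoch with the lowest validation loss.
# `train_one_epoch` and `validate` are placeholders for the real loop.

def train_with_best_checkpoint(num_epochs, train_one_epoch, validate):
    """Run training and return (best_epoch, best_loss, best_state)."""
    best_loss = float("inf")
    best_epoch = None
    best_state = None
    for epoch in range(num_epochs):
        state = train_one_epoch(epoch)      # returns model parameters/state
        val_loss = validate(state)          # validation loss after this epoch
        if val_loss < best_loss:            # strictly better than any epoch so far
            best_loss = val_loss
            best_epoch = epoch
            best_state = state              # in practice: torch.save(state, path)
    return best_epoch, best_loss, best_state
```

The comparison against the best loss seen so far (rather than just the previous epoch) is what makes the saved model the global best over the run.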

Hello,
I don't remember exactly :) but training one VoiceModel should not take long (one or two hours).
The PyTorch code was not optimized at all; it was just there to give people a clear implementation of the model to build upon. But it is still satisfactory when used in NONOTO, as in this video: https://drive.google.com/open?id=12OS-eYg34EDu2T4D97I2zrzG_tHpzRjO (even if you'll notice some writing mistakes ;) ). The model I used for this video is the one on Docker Hub: https://hub.docker.com/r/ghadjeres/deepbach

Thank you so much for the response!!

By saving the model with the lowest validation loss at the end of an epoch, we ended up with Voice 0 trained for 6 epochs, Voice 1 for 7 epochs, Voice 2 for 5 epochs, and Voice 3 for 6 epochs. The entire process took about 3 hours on a GPU. The results seem comparable to those of the original model (including the generation you just linked me to), so I am okay with this for now. Thanks again!