NVIDIA/mellotron

inference speed on CPU

Adibian opened this issue · 0 comments

Hi.
I am exploring the training and inference speed of different multi-speaker TTS models on a single CPU or a single GPU. Thanks for any information on this for the current model, or for any other multi-speaker TTS models.
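
For anyone measuring this themselves, here is a minimal timing sketch in plain PyTorch. It is not Mellotron's actual API: `model` and `inputs` are placeholders you would replace with the loaded Mellotron model and its real conditioning inputs; only the timing and device-handling pattern is the point.

```python
import time
import torch

def benchmark_inference(model, inputs, device="cpu", n_warmup=3, n_runs=10):
    """Time forward passes of a PyTorch model on the given device.

    `model` and `inputs` are hypothetical placeholders; substitute the
    actual Mellotron model and its inputs.
    """
    model = model.to(device).eval()
    inputs = tuple(t.to(device) for t in inputs)

    with torch.no_grad():
        # Warm-up runs so one-time costs (memory allocation, kernel
        # selection) do not skew the measurement.
        for _ in range(n_warmup):
            model(*inputs)

        times = []
        for _ in range(n_runs):
            if device != "cpu":
                torch.cuda.synchronize()  # flush pending GPU work first
            start = time.perf_counter()
            model(*inputs)
            if device != "cpu":
                torch.cuda.synchronize()  # wait for the GPU to finish
            times.append(time.perf_counter() - start)

    return sum(times) / len(times)
```

Running this once with `device="cpu"` and once with `device="cuda"` gives a rough per-utterance latency comparison; the warm-up and `torch.cuda.synchronize()` calls matter because GPU kernels launch asynchronously and the first pass is usually much slower than steady state.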