AI4Bharat/Indic-TTS

Model Loading & Inference Time

Closed this issue · 1 comment

Hello Team,

Recently, I was able to set up IndicTTS on our A100 GPU instance. What I observed is that the model loading time is ~12 minutes when I use the flag use_cuda=True, which is quite high.

When I disable the GPU with use_cuda=False, the model loads very fast (1592.0169 ms), but the inference time is very high.
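To make the two timings directly comparable, it can help to measure the load step and the inference step separately with wall-clock timers. Below is a minimal sketch of such a helper; the `load_model` and `synthesizer.tts` calls in the commented usage are hypothetical placeholders, not the actual Indic-TTS API.

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and print its wall-clock duration in ms."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return result

# Hypothetical usage -- substitute the real loader/synthesis calls:
# synthesizer = timed("model load", load_model, checkpoint_path, use_cuda=True)
# wav = timed("inference", synthesizer.tts, "sample text")
```

Timing each step in isolation makes it easier to tell whether the ~12 minutes is spent in the model load itself or in one-time GPU/CUDA initialization that only happens on the first call.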

It looks like I am missing something. Can anyone help me figure out what I am doing wrong so I can fix this timing issue?

Thanks

The problem is with the GPU.