idiap/coqui-ai-TTS

[Bug] Training XTTSv2 leads to weird training lags

NikitaKononov opened this issue · 1 comment

Describe the bug

Hello, training XTTSv2 leads to strange training lags: training gets stuck with no errors.

With DDP (6x RTX A6000 and 512 GB RAM):
Here is the GPU load monitoring graph. Purple is GPU 0, green is GPU 1 (all the remaining GPUs behave like GPU 1).
[image: GPU utilization over time during DDP training]

Without DDP:
[image: GPU utilization over time without DDP]

I tried different dataset sizes (2,500 hours and 250 hours); the result is the same.

I think there may be some kind of error in the Trainer or in the XTTS scripts, but I don't know where to dig. Thank you.
There is no swap usage, no CPU overload, and no RAM overload (according to ClearML, htop, and top, at least).
The disk is a fast NVMe drive.
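
Since the run hangs with no traceback, one idea is to add a stack-dump hook to the training script (a minimal sketch, not part of the recipe), so a stuck rank's Python stack traces can be printed on demand with kill -USR1 <pid>:

# Minimal diagnostic sketch (assumed addition near the top of
# recipes/ljspeech/xtts_v2/train_gpt_xtts.py, not part of the recipe):
# dump every thread's stack trace when the process receives SIGUSR1,
# so a stuck rank can be inspected with `kill -USR1 <pid>`.
import faulthandler
import signal

faulthandler.register(signal.SIGUSR1, all_threads=True)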

To Reproduce

python -m trainer.distribute --script recipes/ljspeech/xtts_v2/train_gpt_xtts.py --gpus 0,1,2,3,4,5
python -m trainer.distribute --script recipes/ljspeech/xtts_v2/train_gpt_xtts.py --gpus 0,1
python3 recipes/ljspeech/xtts_v2/train_gpt_xtts.py
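
Assuming the Trainer uses PyTorch DDP with the NCCL backend under the hood, the hang could also be investigated by enabling verbose distributed logging before launch; a sketch using standard NCCL/PyTorch environment variables:

# Sketch: enable verbose distributed logging (these are standard NCCL /
# PyTorch environment variables; set them in the shell or in the script
# before the distributed backend is initialized).
import os

os.environ["NCCL_DEBUG"] = "INFO"                  # log every NCCL collective setup
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"   # extra DDP synchronization logging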

Expected behavior

No response

Logs

No response

Environment

TTS 0.24.1
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A6000               On  | 00000000:01:00.0 Off |                  Off |
| 46%   70C    P2             229W / 300W |  32382MiB / 49140MiB |     91%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A6000               On  | 00000000:25:00.0 Off |                  Off |
| 42%   68C    P2             246W / 300W |  27696MiB / 49140MiB |     77%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA RTX A6000               On  | 00000000:41:00.0 Off |                  Off |
| 38%   67C    P2             256W / 300W |  27640MiB / 49140MiB |     63%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA RTX A6000               On  | 00000000:81:00.0 Off |                  Off |
| 39%   67C    P2             245W / 300W |  27640MiB / 49140MiB |     67%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA RTX A6000               On  | 00000000:A1:00.0 Off |                  Off |
| 46%   70C    P2             239W / 300W |  27620MiB / 49140MiB |     66%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA RTX A6000               On  | 00000000:C2:00.0 Off |                  Off |
| 30%   31C    P8              17W / 300W |      3MiB / 49140MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

Additional context

No response

I tried num_workers=0 and >0, MP_THREADS_NUM, and so on; nothing helps.
There is plenty of RAM and shared memory.
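
To rule out the data pipeline, a rough probe like the sketch below (the dataset object and batch size are placeholders, not the recipe's actual values) could time how long each batch takes to arrive from a plain DataLoader:

# Rough probe (hypothetical, not part of the recipe): measure how long each
# batch takes to arrive from a plain DataLoader, to check whether the stalls
# come from data loading rather than from DDP synchronization.
import time
from torch.utils.data import DataLoader

def probe_loader(dataset, num_workers: int = 0, max_steps: int = 100) -> None:
    loader = DataLoader(dataset, batch_size=4, num_workers=num_workers)
    last = time.monotonic()
    for step, _batch in enumerate(loader):
        now = time.monotonic()
        print(f"step {step}: {now - last:.2f}s since previous batch")
        last = now
        if step >= max_steps:
            break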