yl4579/StyleTTS2

High-pitched noise in the background when using old GPUs

danielmsu opened this issue · 8 comments

Previously discussed here: #1 (comment)

The model produces high-pitched noise in the background when I use my old GPU for inference (NVIDIA Quadro P5000, driver version 515.105.01, CUDA version 11.7).

Audio examples: (attached to the original issue)

I solved this problem by switching to the CPU device, so this issue is just for reference, as requested by the author. A sketch of the workaround is below.
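For reference, a minimal sketch of that workaround, assuming the setup of the StyleTTS2 inference notebook (the `model` dict and `noise` tensor below are assumptions based on that notebook, not a fixed API):

```python
import torch

# Force CPU inference instead of selecting CUDA when available.
device = 'cpu'

# In the inference notebook, `model` is a dict of sub-modules; move them all:
# _ = [model[key].to(device) for key in model]

# The diffusion noise must live on the same device as the model:
# noise = torch.randn(1, 1, 256).to(device)
```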

Thank you for your work!

yl4579 commented

I'm gonna pin this in case someone else has similar problems. I don't know how to deal with this because I can't reproduce the problem with the oldest GPU I have access to right now (GTX 1080 Ti).

The same problem happens to me; my GPU is an A100. And using the CPU for inference doesn't help in my case: the noise is still there on the CPU.

yl4579 commented

@ruby11dog could you please share the details to reproduce this problem? It seems it’s not related to the version of GPUs then?

> @ruby11dog could you please share the details to reproduce this problem? It seems it’s not related to the version of GPUs then?

The noise appears when running inference with your pretrained model "epoch_2nd_00100.pth". But with a model I trained myself, the noise seems to fade away as the number of second-stage epochs increases.
Relevant Python package version: torch 2.1.0

yl4579 commented

So weird. I tried it in Colab (T4, V100, and A100) without pinning any library versions and it works perfectly fine: https://colab.research.google.com/drive/1k5OqSp8a-x-27xlaWr2kZh_9F9aBh39K
I'm really wondering what the reason behind this problem is. It doesn't seem to be just the GPU generation, though.

I have just run into the same problem after training the model in the cloud on 4x A40 GPUs. I did inference both locally (RTX 3060 and CPU) and on the cloud machine (4x A40 and CPU). Inference on the cloud works fine without any background pitches. However, running it locally produces the background pitches (the same holds for CPU inference: fine on the cloud, pitches locally).

After some investigation, the sampler seems to be the culprit. Since it consumes random noise, and my tests run either on the cloud or locally, it is not possible to set a seed that makes the sampler produce identical outputs on both machines.

A quick way to test this was to save the sampler's output on the cloud and copy it to my local PC, where it is then used for the rest of the inference. After that, the background pitch was gone and the audio sounded exactly as it should:

```python
# Run the diffusion sampler -- the step whose output differs between machines
s_pred = sampler(noise,
                 embedding=bert_dur[0].unsqueeze(0),
                 num_steps=diffusion_steps,
                 embedding_scale=embedding_scale).squeeze(0)

# On the cloud: save the sampled style tensor
# torch.save(s_pred, 'sampled_tensor.pt')

# Locally: fetch the tensor (e.g. via rsync) and load it instead of sampling
# s_pred = torch.load('sampled_tensor.pt')
```

At the moment I don't have time to look into the sampler, but I think a closer inspection of it could lead to fixing this bug.

After more investigation, I have solved the problem for myself. The cause in my case was sigma_data in the config. In the default config it is set to 0.2, but during training this value is re-estimated and written to a new config file, which is stored in your Models folder. Doing inference with the default config file, I got the high pitch; using the config file written during training, the pitch was gone and the sound was good. So this is the solution that works for me.
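A minimal sketch of how one might verify this, assuming the repo's YAML config layout (the file paths and the `model_params.diffusion.dist.sigma_data` key path are assumptions and may need adjusting):

```python
import yaml

def sigma_data(cfg):
    # Key path is an assumption based on the repo's config structure.
    return cfg['model_params']['diffusion']['dist']['sigma_data']

# Default config shipped with the repo vs. the one written during training.
with open('Configs/config.yml') as f:
    default_cfg = yaml.safe_load(f)
with open('Models/LJSpeech/config.yml') as f:
    trained_cfg = yaml.safe_load(f)

print('default sigma_data:', sigma_data(default_cfg))   # e.g. 0.2
print('trained sigma_data:', sigma_data(trained_cfg))   # re-estimated value

# For inference, build the model from trained_cfg, not default_cfg.
```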

Btw, @yl4579, thank you for your great work. It's really awesome that you built this and made your code open source!

As per my investigation, the problem is not with the GPUs; it's caused by the number of diffusion steps used at inference. Reducing the diffusion steps removed the screeching noise completely. Enjoy, folks!
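A sketch of that change, reusing the sampler call from the snippet above (the step count of 5 is illustrative, not a verified optimum):

```python
# Lower the number of diffusion steps at inference time and re-listen.
diffusion_steps = 5  # try reducing from e.g. 10 until the noise disappears

s_pred = sampler(noise,
                 embedding=bert_dur[0].unsqueeze(0),
                 num_steps=diffusion_steps,
                 embedding_scale=embedding_scale).squeeze(0)
```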