XiangLi1999/Diffusion-LM

Do we need to scale word embeddings to [-1, 1]?

tj-zhu opened this issue · 3 comments

Hi there, thank you very much for providing the code!

I am new to diffusion models, so I apologize in advance if this is a dumb question.

In this line, it seems we take the word embeddings and add noise to them directly, without ensuring the embeddings lie within [-1, 1].

In DDPM, images need to be scaled to [-1, 1] for the noise scheduler's parameters to work properly.
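For reference, the standard DDPM image preprocessing is just an affine rescaling of pixel values. A minimal sketch (the function name here is illustrative, not from any particular codebase):

```python
import numpy as np

def scale_to_unit_range(pixels: np.ndarray) -> np.ndarray:
    """Map uint8 pixel values in [0, 255] to floats in [-1, 1],
    the range DDPM's noise schedule assumes for its inputs."""
    return pixels.astype(np.float32) / 127.5 - 1.0

img = np.array([0, 128, 255], dtype=np.uint8)
scaled = scale_to_unit_range(img)  # endpoints map to -1.0 and 1.0
```

The question is why no analogous rescaling is applied to the word embeddings before noising them.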

I am wondering how the scale is controlled for text embeddings.

Thank you very much!

Hi,

Thanks for the question. We do not map the word embeddings into [-1, 1]; this is one way text diffusion differs from image diffusion models.

There are three terms in the objective: (1) Lsimple (the MSE term), (2) the reconstruction term (i.e. decoder_nll), and (3) the prior term (tT_loss), combined as

terms["loss"] = terms["mse"] + (decoder_nll + tT_loss)

Term (2) prevents the embedding norm from becoming too small, and term (3) prevents it from becoming too large.
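A minimal NumPy sketch of how the three terms interact, with toy shapes, a random "rounding head," and a faked model output standing in for the actual network (all of these are illustrative assumptions, not the repository's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: batch 2, sequence length 4, embedding dim 8, vocab size 10.
B, L, D, V = 2, 4, 8, 10
x_start = rng.normal(size=(B, L, D))         # word embeddings EMB(w)
input_ids = rng.integers(0, V, size=(B, L))  # the original tokens
W = rng.normal(size=(D, V))                  # rounding head: embedding -> logits

# (1) Lsimple: MSE between the model's prediction and the target.
#     A fake "model output" stands in for the denoising network here.
model_out = x_start + 0.1 * rng.normal(size=x_start.shape)
mse = np.mean((model_out - x_start) ** 2)

# (2) decoder_nll: cross-entropy of rounding x_start back to tokens.
#     If embedding norms shrink toward 0, the logits flatten toward a
#     uniform distribution and this term rises -- so it keeps the norm
#     from collapsing.
logits = x_start @ W                              # (B, L, V)
logits = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
decoder_nll = -np.mean(
    np.take_along_axis(log_probs, input_ids[..., None], axis=-1)
)

# (3) tT_loss: pushes the mean of q(x_T | x_0) toward the standard
#     Gaussian prior; it grows with the embedding norm, so it keeps
#     embeddings from drifting too large.
sqrt_alpha_bar_T = 1e-2            # almost no signal remains at t = T
tT_loss = np.mean((sqrt_alpha_bar_T * x_start) ** 2)

loss = mse + decoder_nll + tT_loss
```

Terms (2) and (3) pull the embedding norm in opposite directions, which is what makes explicit [-1, 1] scaling unnecessary.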

Hope this helps!

Yes, this explains it! Thank you very much for the quick response and the great explanation!

Hi @XiangLi1999, I am sorry for reopening the issue. I just have one more question about the loss function.

Can I ask why, in the decoder_nll loss, we input x_start instead of the predicted x_start?
You mentioned that decoder_nll prevents the word embeddings from becoming too small. I assume that is because if the word embeddings are too small, the noise dominates, making it difficult for the model to denoise, so the reconstruction loss would be high? Please correct me if I am wrong.

If that is the purpose of this reconstruction loss, shouldn't we use the predicted x_start (the denoised version) to calculate it?

Sorry if the answer is obvious, but I did not get it. Thank you very much for your help!

decoder_nll = self.token_discrete_loss(x_start, get_logits, input_ids)
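Based only on the call site quoted above, the rounding loss plausibly works along these lines; the internals below are a guess from the argument names, not the repository's exact implementation:

```python
import numpy as np

def token_discrete_loss(x_t, get_logits, input_ids):
    """Sketch: round continuous vectors x_t back to discrete tokens by
    scoring them with get_logits, then take the cross-entropy against
    the original input_ids. (Assumed internals, inferred from the call
    token_discrete_loss(x_start, get_logits, input_ids).)"""
    logits = get_logits(x_t)                               # (B, L, V)
    logits = logits - logits.max(axis=-1, keepdims=True)   # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -np.take_along_axis(log_probs, input_ids[..., None], axis=-1)
    return nll.mean()

# Hypothetical usage with a random linear rounding head:
rng = np.random.default_rng(0)
B, L, D, V = 2, 4, 8, 10
W = rng.normal(size=(D, V))
x_start = rng.normal(size=(B, L, D))
input_ids = rng.integers(0, V, size=(B, L))
decoder_nll = token_discrete_loss(x_start, lambda x: x @ W, input_ids)
```

Note that because the clean x_start (not a denoised prediction) is passed in, this term directly regularizes the learned embedding table itself, which is consistent with its role of keeping embedding norms from collapsing.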