XiangLi1999/Diffusion-LM

How did you derive your sampling algo?

jzhang38 opened this issue · 4 comments

Hi Lisa,

Thanks for your wonderful work.

May I ask how you derived the sampling algorithm mathematically for x_0 prediction? (I am looking for the sort of proof given in DDPM for ε-prediction.)

This is actually quite similar to the DDPM sampling algorithm. Both ε-prediction and x_0 prediction are transformed back to derive p(x_{t-1} | x_t), and both derivations rely on x_{t-1} = \sqrt{\bar\alpha_{t-1}} f_\theta(x_t, t) + \sqrt{1 - \bar\alpha_{t-1}} \epsilon, with \epsilon \sim N(0, I), where f_\theta(x_t, t) is the predicted x_0.
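As a minimal sketch of the update rule described above (variable names and the `alpha_bar` schedule array are my own assumptions, not from the Diffusion-LM codebase):

```python
import numpy as np

def x0_sampling_step(x_t, t, f_theta, alpha_bar):
    """One reverse step of x_0-prediction sampling:
    x_{t-1} = sqrt(abar_{t-1}) * f_theta(x_t, t) + sqrt(1 - abar_{t-1}) * eps,
    where abar is the cumulative product of the alpha schedule."""
    x0_hat = f_theta(x_t, t)               # model's prediction of x_0
    ab = alpha_bar[t - 1]                  # \bar\alpha_{t-1}
    eps = np.random.randn(*x_t.shape)      # fresh Gaussian noise
    return np.sqrt(ab) * x0_hat + np.sqrt(1.0 - ab) * eps
```

Note that at t = 1 (where \bar\alpha_0 = 1) this returns the prediction x0_hat exactly, with no added noise.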

I think reading the last paragraph of section 4.2 could help.

My confusion is that you appear to rely on the forward process q(x_{t-1} | x_0) to sample, whereas DDPM samples by predicting the mean of the backward process p(x_{t-1} | x_t) (which we learn through the closed-form posterior q(x_{t-1} | x_t, x_0)). Is there any derivation I can find (perhaps in other papers that also use x_0 prediction) proving that these two sampling procedures are mathematically equivalent?

In other words, DDPM samples through q(x_{t-1} | x_t, x_0), but Diffusion-LM samples through q(x_{t-1} | f_\theta(x_t,t)).

Maybe check out the last equation on page 17 of the Diffusion-LM arXiv paper.

[Screenshot: the last equation on page 17 of the Diffusion-LM arXiv paper]

Thanks for your prompt reply! Yeah, I understand the training loss is essentially the same. My question is about the sampling algorithm: I think if we follow DDPM to perform sampling, we are supposed to sample with the posterior mean as defined above, with x_0 predicted by f_\theta(x_t, t).
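For reference, the DDPM posterior mean in question, written in the standard notation (this is the well-known closed form, not a formula specific to Diffusion-LM):

```latex
\tilde{\mu}_t(x_t, x_0)
  = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t}\, x_0
  + \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\, x_t,
\qquad
q(x_{t-1} \mid x_t, x_0) = \mathcal{N}\!\left(x_{t-1};\, \tilde{\mu}_t(x_t, x_0),\, \tilde{\beta}_t I\right)
```

so the DDPM-style step would substitute x_0 = f_\theta(x_t, t) into \tilde{\mu}_t, rather than drawing x_{t-1} from q(x_{t-1} | x_0 = f_\theta(x_t, t)).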