How did you derive your sampling algo?
jzhang38 opened this issue · 4 comments
Hi Lisa,
Thanks for your wonderful work.
May I ask how you derived the sampling algorithm mathematically for x_0 prediction? (I am looking for the sort of proof DDPM gives for ε-prediction.)
This is actually quite similar to the DDPM sampling algorithm. Both ε-prediction and x_0 prediction are transformed back to derive p(x_{t-1} | x_t), and both derivations rely on x_{t-1} = \sqrt{\alpha} f_\theta(x_t, t) + \sqrt{1 - \alpha} \epsilon with \epsilon \sim N(0, I), where f_\theta(x_t, t) is the predicted x_0.
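The step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: the names `f_theta` (the trained x_0-predictor) and `alpha_bar` (the cumulative noise-schedule product, so that `alpha` in the equation above is `alpha_bar[t-1]`) are assumptions for the sketch.

```python
import numpy as np

def x0_prediction_step(x_t, t, f_theta, alpha_bar, rng):
    """One reverse step via x_0 prediction, as in the equation above:
    x_{t-1} = sqrt(alpha) * f_theta(x_t, t) + sqrt(1 - alpha) * eps,
    where alpha = alpha_bar[t-1] (cumulative product of the schedule)
    and eps ~ N(0, I). t is 1-indexed.
    """
    x0_hat = f_theta(x_t, t)              # predicted x_0
    eps = rng.standard_normal(x_t.shape)  # fresh Gaussian noise
    a = alpha_bar[t - 1]
    return np.sqrt(a) * x0_hat + np.sqrt(1.0 - a) * eps
```

In other words, the sampler just re-noises the predicted x_0 back to level t-1 using the forward-process marginal.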
I think reading the last paragraph of Section 4.2 could help.
My confusion is that you appear to rely on the forward process q(x_{t-1} | x_0) to sample, whereas DDPM samples by predicting the mean of the backward process p(x_{t-1} | x_t), which is learned through the closed-form posterior q(x_{t-1} | x_t, x_0). Is there a derivation I can find (perhaps in other papers that also use x_0 prediction) proving that these two sampling procedures are mathematically equivalent?
In other words, DDPM samples through q(x_{t-1} | x_t, x_0), but Diffusion-LM samples through q(x_{t-1} | f_\theta(x_t,t)).
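For contrast, the DDPM-style step samples from the closed-form posterior q(x_{t-1} | x_t, x_0) with the predicted x_0 plugged in for the true x_0 (Eq. 7 of the DDPM paper). A minimal sketch, again with assumed names (`betas` is the forward noise schedule, `f_theta` the x_0-predictor):

```python
import numpy as np

def ddpm_posterior_step(x_t, t, f_theta, betas, rng):
    """One DDPM reverse step: sample from q(x_{t-1} | x_t, x_0) with
    x_0 replaced by the model's prediction f_theta(x_t, t). t is 1-indexed.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    a_t, ab_t = alphas[t - 1], alpha_bar[t - 1]
    ab_prev = alpha_bar[t - 2] if t > 1 else 1.0
    x0_hat = f_theta(x_t, t)
    # Posterior mean: closed-form coefficients of q(x_{t-1} | x_t, x_0)
    mean = (np.sqrt(ab_prev) * betas[t - 1] / (1 - ab_t)) * x0_hat \
         + (np.sqrt(a_t) * (1 - ab_prev) / (1 - ab_t)) * x_t
    # Posterior variance (the "tilde beta_t" of DDPM)
    var = betas[t - 1] * (1 - ab_prev) / (1 - ab_t)
    return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)
```

The question in this thread is precisely whether this posterior step and the forward-marginal step above agree; note they use the same f_theta but different coefficients on x0_hat and x_t.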
Maybe check out the last equation on page 17 of the Diffusion-LM arXiv paper.