Sampling algorithm differs from the paper.
ariel415el opened this issue · 5 comments
Hi,
I want to elaborate on #2:
The sampling algorithm in your code is a bit different from what is shown in the paper.
The paper suggests this sampling step (Algorithm 2):

x_{t-1} = (1 / √α_t) · (x_t − ((1 − α_t) / √(1 − ᾱ_t)) · ε_θ(x_t, t)) + σ_t · z
The clipping is done here: diffusion/diffusion_tf/diffusion_utils.py, line 172 (commit 1e0dceb).
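For anyone landing here, the logic at that spot is roughly the following (a paraphrased NumPy sketch, not the repo's exact TF code; the function name and the dummy values are my own):

```python
import numpy as np

def predict_xstart_from_eps(x_t, eps, alphas_cumprod_t):
    # Invert the forward process x_t = sqrt(a_bar_t)*x_0 + sqrt(1 - a_bar_t)*eps:
    #   x_0 = (x_t - sqrt(1 - a_bar_t)*eps) / sqrt(a_bar_t)
    return (x_t - np.sqrt(1.0 - alphas_cumprod_t) * eps) / np.sqrt(alphas_cumprod_t)

# Dummy values just to make the snippet runnable
x_t = np.random.randn(4, 32, 32, 3)
eps = np.random.randn(4, 32, 32, 3)
alphas_cumprod_t = 0.5

x_recon = predict_xstart_from_eps(x_t, eps, alphas_cumprod_t)
x_recon = np.clip(x_recon, -1.0, 1.0)  # the clipping this issue is about
```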
Now I checked and indeed, without the clipping, the two equations are the same.
Can you give any interpretation or intuition for the clipping and why it is needed?
It seems to be crucial in practice, yet it is not mentioned in the paper.
Thanks
Is there any update on this? In my experience this detail has been crucial in determining sample quality, yet it seems to be largely unaddressed in work on diffusion models. Does anyone have any insight on this?
In https://huggingface.co/blog/annotated-diffusion, the author says:
Note that the code above is a simplified version of the original implementation. We found our simplification (which is in line with Algorithm 2 in the paper) to work just as well as the original, more complex implementation, which employs clipping.
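For comparison, the simplified step the post refers to looks roughly like this (a NumPy sketch of Algorithm 2 without clipping; `model_eps` is a placeholder for the trained noise predictor, and σ_t² = β_t is one of the two variance choices the paper mentions):

```python
import numpy as np

def p_sample_no_clip(x_t, t, model_eps, betas, alphas_cumprod):
    # Algorithm 2: x_{t-1} = (x_t - beta_t / sqrt(1 - a_bar_t) * eps) / sqrt(alpha_t) + sigma_t * z
    b_t = betas[t]
    mean = (x_t - b_t / np.sqrt(1.0 - alphas_cumprod[t]) * model_eps(x_t, t)) / np.sqrt(1.0 - b_t)
    noise = np.random.randn(*x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(b_t) * noise  # sigma_t^2 = beta_t
```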
The issue is that the predictions are often out of range, so the authors are trying to impose some sort of correction to get meaningful samples. To do that, they restrict x_recon to the range -1 to +1 by clipping. So here is how they generate samples (see the sketch after the list):
- Get the error (noise) prediction at step t
- Reconstruct the image, i.e. x_recon, from the error prediction
- Clip x_recon, since we know x is in the range -1 to +1
- Using the clipped x_recon, sample x_{t-1}
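Putting those four steps together, one reverse step with clipping looks something like this (again a hedged NumPy sketch, not the repo's actual TF implementation; the schedule arrays are assumed precomputed, and `model_eps` stands in for the trained network):

```python
import numpy as np

def p_sample_clipped(x_t, t, model_eps, betas, alphas_cumprod, alphas_cumprod_prev):
    """One reverse step with x_0 clipping (a sketch, not the repo's exact code)."""
    a_t, a_prev, b_t = alphas_cumprod[t], alphas_cumprod_prev[t], betas[t]

    # 1. error prediction at step t
    eps = model_eps(x_t, t)

    # 2. reconstruct x_0 from the error prediction
    x_recon = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)

    # 3. clip, since real data lives in [-1, 1]
    x_recon = np.clip(x_recon, -1.0, 1.0)

    # 4. sample x_{t-1} from the posterior q(x_{t-1} | x_t, x_0 = x_recon)
    mean = (b_t * np.sqrt(a_prev) / (1.0 - a_t)) * x_recon \
         + ((1.0 - a_prev) * np.sqrt(1.0 - b_t) / (1.0 - a_t)) * x_t
    var = (1.0 - a_prev) / (1.0 - a_t) * b_t   # beta_tilde_t
    noise = np.random.randn(*x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(var) * noise
```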
This is a hack, and it will lead to increased density at -1 and +1.
I don't see the definition of σ_t in the paper. Where is it mentioned and defined? And why do we need to add noise in the reverse process?
To make each reverse step a normal distribution. σ_t is defined in Section 3.2 of the paper: the authors use either σ_t² = β_t or σ_t² = β̃_t and report similar results for both. Without the σ_t · z term, the reverse process would be deterministic instead of a sample from p_θ(x_{t-1} | x_t).
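For reference, the two variance choices from Section 3.2 can be computed like this (a minimal NumPy sketch; the linear β schedule below matches the one used in the paper's experiments):

```python
import numpy as np

betas = np.linspace(1e-4, 0.02, 1000)            # linear schedule, 1000 steps
alphas_cumprod = np.cumprod(1.0 - betas)
alphas_cumprod_prev = np.append(1.0, alphas_cumprod[:-1])

sigma_sq_option1 = betas                                                          # sigma_t^2 = beta_t
sigma_sq_option2 = (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod) * betas   # beta_tilde_t
```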