wl-zhao/VPD

t set to 0 in unet

elhamAm opened this issue · 1 comment

Shouldn't this line use zeros instead of ones:

`t = torch.ones((img.shape[0],), device=img.device).long()`

In the paper you mention that you set t to 0 so that no noise is added to the latent embedding.

That's right. In the initial implementation of our method, we set t with torch.ones to avoid a potential numerical issue (it later turned out that setting t=0 causes no problem in the code). I believe t=1 versus t=0 does not affect performance, because the total num_timesteps is 1000, so the difference between the two timesteps is negligible.
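To see why t=0 versus t=1 is negligible, here is a small sketch (not the VPD code itself) that computes the signal and noise scales of the standard DDPM forward process, q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I), under the common linear beta schedule with 1000 timesteps. The schedule endpoints (1e-4 to 0.02) are the usual Stable Diffusion defaults and are an assumption here:

```python
import torch

# Assumed linear beta schedule (1e-4 to 0.02), as in Stable Diffusion.
num_timesteps = 1000
betas = torch.linspace(1e-4, 0.02, num_timesteps)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# Forward process: x_t = sqrt(acp[t]) * x_0 + sqrt(1 - acp[t]) * eps
for t in (0, 1):
    signal_scale = alphas_cumprod[t].sqrt().item()
    noise_scale = (1.0 - alphas_cumprod[t]).sqrt().item()
    print(f"t={t}: signal scale {signal_scale:.5f}, noise scale {noise_scale:.5f}")
```

At both t=0 and t=1 the signal scale is above 0.999 and the noise scale is below 0.02, so the latent passed to the UNet is essentially the clean embedding either way; the only difference is which (nearly identical) timestep embedding the UNet conditions on.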