t set to 0 in unet
elhamAm opened this issue · 1 comment
elhamAm commented
Shouldn't this line use zeros instead of ones?
VPD/segmentation/models/vpd_seg.py
Line 96 in 940fc5f
In the paper you mention that you set t to 0 so that no noise is added to the latent embedding.
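For context, under the standard DDPM forward process a latent is noised as x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, so the amount of noise mixed in at the very first timestep is negligible. The sketch below illustrates this with a plain linear beta schedule over 1000 steps; the schedule endpoints (1e-4 to 0.02) are common DDPM defaults assumed here, not values taken from VPD's code.

```python
import math

# Linear beta schedule over 1000 timesteps (assumed defaults, not VPD's exact schedule)
num_timesteps = 1000
beta_start, beta_end = 1e-4, 0.02
betas = [
    beta_start + (beta_end - beta_start) * i / (num_timesteps - 1)
    for i in range(num_timesteps)
]

# Cumulative product alpha_bar_t = prod_{s<=t} (1 - beta_s)
alpha_bar = 1.0
alpha_bars = []
for b in betas:
    alpha_bar *= 1.0 - b
    alpha_bars.append(alpha_bar)

# Coefficient on the noise term sqrt(1 - alpha_bar_t) at the first timestep
noise_coeff_t0 = math.sqrt(1.0 - alpha_bars[0])
print(f"noise coefficient at t=0: {noise_coeff_t0:.4f}")  # sqrt(beta_0) = 0.01
```

At the first timestep the latent keeps essentially all of its signal, which is consistent with the paper's statement that t=0 leaves the latent embedding noise-free.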
wl-zhao commented
That's right. In the initial implementation of our method, we set t using torch.ones to avoid any potential numerical issue (it later turned out that setting t=0 in the code causes no problem). I believe that whether t=1 or t=0 does not affect performance, because the total num_timesteps is 1000.
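To see why t=0 vs t=1 should make no practical difference, note that the UNet only sees t through its timestep embedding, and adjacent timesteps out of 1000 map to nearly identical embedding vectors. Below is a minimal sketch of the standard sinusoidal timestep embedding used by diffusion UNets (an illustrative re-implementation, not VPD's actual code; the dimension 320 is an assumption).

```python
import math

def timestep_embedding(t, dim=320, max_period=10000):
    """Sinusoidal timestep embedding, as commonly used in diffusion UNets."""
    half = dim // 2
    freqs = [math.exp(-math.log(max_period) * i / half) for i in range(half)]
    return [math.cos(t * f) for f in freqs] + [math.sin(t * f) for f in freqs]

emb0 = timestep_embedding(0)
emb1 = timestep_embedding(1)

# Cosine similarity between the embeddings for t=0 and t=1.
dot = sum(a * b for a, b in zip(emb0, emb1))
norm0 = math.sqrt(sum(a * a for a in emb0))
norm1 = math.sqrt(sum(b * b for b in emb1))
cos_sim = dot / (norm0 * norm1)
print(f"cosine similarity between t=0 and t=1 embeddings: {cos_sim:.4f}")
```

Since t=0 and t=1 are adjacent points on a smooth curve spanning 1000 timesteps, the two embeddings are very close (cosine similarity well above 0.9), so the conditioning the UNet receives is effectively the same either way.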