yang-song/score_sde_pytorch

Why must t be multiplied by 999 when calculating the score?

vrvrv opened this issue · 1 comment

vrvrv commented

Hi,

I have a question about your code.

if continuous or isinstance(sde, sde_lib.subVPSDE):
  # For VP-trained models, t=0 corresponds to the lowest noise level.
  # The maximum value of the time embedding is assumed to be 999 for
  # continuously-trained models.
  labels = t * 999
  score = model_fn(x, labels)
  std = sde.marginal_prob(torch.zeros_like(x), t)[1]
else:
  # For VP-trained models, t=0 corresponds to the lowest noise level.
  labels = t * (sde.N - 1)
  score = model_fn(x, labels)
  std = sde.sqrt_1m_alphas_cumprod.to(labels.device)[labels.long()]

score = -score / std[:, None, None, None]
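
For context, sde.N defaults to 1000 for the VP SDE (as in DDPM), so both branches appear to map t in [0, 1] onto the same embedding range [0, 999]. A minimal sketch, assuming that default:

import torch

# Assumed default: N = 1000 discrete steps, indexed 0 .. 999 (as in DDPM).
N = 1000
t = torch.rand(4)                # continuous time in [0, 1]

labels_continuous = t * 999      # continuous branch
labels_discrete = t * (N - 1)    # discrete branch; same range [0, 999]

# Either way the network sees time embeddings in [0, 999], so a
# continuously-trained model stays compatible with DDPM-style checkpoints.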

When computing the score-matching loss, it seems that you post-process the output of the model (a score model). But it doesn't make sense to me to multiply t by 999 and to rescale with score = -score / std, given that you then use the model's output in the reverse (sampling) process:

# Reverse-time drift: f(x, t) - g(t)^2 * score(x, t),
# with an extra factor of 0.5 for the probability-flow ODE.
drift, diffusion = sde_fn(x, t)
score = score_fn(x, t)
drift = drift - diffusion[:, None, None, None] ** 2 * score * (0.5 if self.probability_flow else 1.)
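
For reference, my understanding is that this is the drift of the reverse-time SDE from the paper, with the 0.5 factor switching to the probability-flow ODE:

dx = [f(x, t) - g(t)^2 \nabla_x \log p_t(x)] dt + g(t) d\bar{w}    (reverse-time SDE)
dx = [f(x, t) - (1/2) g(t)^2 \nabla_x \log p_t(x)] dt              (probability-flow ODE)

so score_fn is expected to return the actual score \nabla_x \log p_t(x).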

Am I missing something?

yang-song commented

I did those just to make sure the code matches the implementation of DDPM. They are by no means natural, and are also not necessary.
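
To make the DDPM correspondence explicit (in standard DDPM notation): the DDPM network \epsilon_\theta predicts the noise \epsilon in

x_t = \sqrt{\bar\alpha_t} x_0 + \sqrt{1 - \bar\alpha_t} \epsilon,    \epsilon ~ N(0, I),

and the score of this perturbation kernel is

\nabla_{x_t} \log p(x_t | x_0) = -(x_t - \sqrt{\bar\alpha_t} x_0) / (1 - \bar\alpha_t) = -\epsilon / \sqrt{1 - \bar\alpha_t}.

Dividing the network output by std = \sqrt{1 - \bar\alpha_t} (i.e. sde.sqrt_1m_alphas_cumprod) and negating therefore turns a noise prediction into a score estimate, while labels = t * 999 rescales continuous t in [0, 1] to DDPM's discrete embedding range.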