fastai/diffusion-nbs

Possible bug in img2img example

cly commented

Hey @johnowhitaker,

Might there be a bug in the img2img example here:

# Loop
for i, t in tqdm(enumerate(scheduler.timesteps)):
    if i > start_step: # << This is the only modification to the loop we do

Should this be i >= start_step?
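For reference, here is roughly how the full loop would read with the >= condition. This is not the exact notebook code, just a sketch of the standard diffusers-style denoising loop, and it assumes unet, text_embeddings, guidance_scale, scheduler, start_step, and latents are already set up as in the notebook:

# Denoising loop, starting at start_step instead of skipping it
for i, t in tqdm(enumerate(scheduler.timesteps)):
    if i >= start_step:  # begin at the step the latents were noised to
        latent_model_input = torch.cat([latents] * 2)  # duplicate for classifier-free guidance
        latent_model_input = scheduler.scale_model_input(latent_model_input, t)
        with torch.no_grad():
            noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
        noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
        noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
        latents = scheduler.step(noise_pred, t, latents).prev_sample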

johnowhitaker commented

I wasn't sure, to be honest! I think either works fine, since we can decide what start_step means. I have vague recollections of there being some sort of off-by-one issue with the timesteps, but I don't remember my specific rationale :) I'll check it out next time I have the notebook up and running, and will leave this issue open in the meantime in case anyone else has thoughts.

cly commented

I tested the following and got back the exact same otter as the first example. This is basically img2img with an all-zeros input image latent.

start_step = 0
noise = torch.randn((1, 4, 64, 64), generator=generator).to(torch_device)
latents = scheduler.add_noise(torch.zeros(1, 4, 64, 64).to(torch_device), noise, timesteps=torch.tensor([scheduler.timesteps[start_step]]))
latents = latents.to(torch_device).float()

# Loop
for i, t in tqdm(enumerate(scheduler.timesteps)):
    if i >= start_step: # << This is the only modification to the loop we do

With the existing code (i > start_step), it doesn't reproduce the same result. Also, I noticed fewer artifacts with i >= start_step for some other samples!
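One rough way to check why the start_step = 0 case should line up exactly (assuming an LMS-style scheduler, where add_noise just adds sigma-scaled noise, and that the from-scratch example starts from noise * scheduler.sigmas[0]):

# With an all-zeros "image" latent, add_noise at the first timestep returns the noise
# scaled by the largest sigma - the same starting point as generating from scratch -
# so running every step (i >= start_step) should reproduce the same otter.
zero_image = torch.zeros(1, 4, 64, 64).to(torch_device)
t0 = torch.tensor([scheduler.timesteps[0]])
img2img_start = scheduler.add_noise(zero_image, noise, timesteps=t0)
scratch_start = noise * scheduler.sigmas[0]
print(torch.allclose(img2img_start, scratch_start))  # expected: True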

cly commented

I can put up a PR if you'd like!

johnowhitaker commented

Sure, that would be great!
Please make sure only the relevant part of the notebook has been changed, so that we don't end up storing a new set of cell outputs and such.

cly commented

Fixed in #21