Off-by-one between the --ddim_steps value passed and the step count shown
mi7chy opened this issue · 3 comments
Unless I'm interpreting the output incorrectly, specifying --ddim_steps 29 shows as 30 in the output and --ddim_steps 30 shows as 31, but --ddim_steps 25 shows correctly as 25.
optimized_txt2img.py --prompt "apple tree" --H 1152 --W 1088 --ddim_steps 29
Global seed set to 18163
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
UNet: Running in eps-prediction mode
CondStage: Running in eps-prediction mode
FirstStage: Running in eps-prediction mode
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Using prompt: apple tree
Sampling: 0%| | 0/1 [00:00<?, ?it/s]seeds used = [18163] | 0/1 [00:00<?, ?it/s]
Data shape for PLMS sampling is [1, 4, 144, 136]
Running PLMS Sampling with 30 timesteps
PLMS Sampler: 100%|████████████████████████████████████████████████████████████████████| 30/30 [02:33<00:00, 5.11s/it]
torch.Size([1, 4, 144, 136])
saving images 100%|████████████████████████████████████████████████████████████████████| 30/30 [02:33<00:00, 4.86s/it]
memory_final = 7.998976
data: 100%|█████████████████████████████████████████████████████████████████████████████| 1/1 [02:38<00:00, 158.59s/it]
Sampling: 100%|█████████████████████████████████████████████████████████████████████████| 1/1 [02:38<00:00, 158.59s/it]
Samples finished in 2.86 minutes and exported to outputs/txt2img-samples\apple_tree
Seeds used = 18163
optimized_txt2img.py --prompt "apple tree" --H 1152 --W 1088 --ddim_steps 30
Global seed set to 545787
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
UNet: Running in eps-prediction mode
CondStage: Running in eps-prediction mode
FirstStage: Running in eps-prediction mode
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Using prompt: apple tree
Sampling: 0%| | 0/1 [00:00<?, ?it/s]seeds used = [545787] | 0/1 [00:00<?, ?it/s]
Data shape for PLMS sampling is [1, 4, 144, 136]
Running PLMS Sampling with 31 timesteps
PLMS Sampler: 100%|████████████████████████████████████████████████████████████████████| 31/31 [02:38<00:00, 5.10s/it]
torch.Size([1, 4, 144, 136])
saving images 100%|████████████████████████████████████████████████████████████████████| 31/31 [02:38<00:00, 4.86s/it]
memory_final = 7.998976
data: 100%|█████████████████████████████████████████████████████████████████████████████| 1/1 [02:43<00:00, 163.35s/it]
Sampling: 100%|█████████████████████████████████████████████████████████████████████████| 1/1 [02:43<00:00, 163.35s/it]
Samples finished in 2.94 minutes and exported to outputs/txt2img-samples\apple_tree
Seeds used = 545787
optimized_txt2img.py --prompt "ethereal mystery portal, seen by wanderer boy in middle of woods, vivid colors, fantasy, trending on artstation, artgerm, cgsociety, greg rutkwoski, alphonse mucha, unreal engine, very smooth, high detail, 4 k, concept art, brush strokes, pixiv art, sharp focus, raging dynamic sky, heavens" --H 768 --W 768 --ddim_steps 25 --scale 7 --seed 3733741481
Global seed set to 3733741481
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
UNet: Running in eps-prediction mode
CondStage: Running in eps-prediction mode
FirstStage: Running in eps-prediction mode
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Using prompt: ethereal mystery portal, seen by wanderer boy in middle of woods, vivid colors, fantasy, trending on artstation, artgerm, cgsociety, greg rutkwoski, alphonse mucha, unreal engine, very smooth, high detail, 4 k, concept art, brush strokes, pixiv art, sharp focus, raging dynamic sky, heavens
Sampling: 0%| | 0/1 [00:00<?, ?it/s]seeds used = [3733741481] | 0/1 [00:00<?, ?it/s]
Data shape for PLMS sampling is [1, 4, 96, 96]
Running PLMS Sampling with 25 timesteps
PLMS Sampler: 100%|████████████████████████████████████████████████████████████████████| 25/25 [00:49<00:00, 2.00s/it]
torch.Size([1, 4, 96, 96])
saving images 100%|████████████████████████████████████████████████████████████████████| 25/25 [00:49<00:00, 1.84s/it]
memory_final = 4.935168
data: 100%|██████████████████████████████████████████████████████████████████████████████| 1/1 [00:54<00:00, 54.08s/it]
Sampling: 100%|██████████████████████████████████████████████████████████████████████████| 1/1 [00:54<00:00, 54.08s/it]
Samples finished in 1.13 minutes and exported to outputs/txt2img-samples\ethereal_mystery_portal,_seen_by_wanderer_boy_in_middle_of_woods,_vivid_colors,_fantasy,_trending_on_artstation,_artgerm,_cgso
Seeds used = 3733741481
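If I'm reading the CompVis ldm code correctly, the extra step comes from the uniform timestep discretization used to build the DDIM/PLMS schedule: the stride is computed as 1000 // ddim_steps, and range(0, 1000, stride) only yields exactly ddim_steps entries when ddim_steps divides 1000 evenly (e.g. 25); otherwise the sampler ends up with one more. Below is a minimal sketch of that arithmetic; the function name and the 1000-timestep assumption are mine, mirroring make_ddim_timesteps (ldm/modules/diffusionmodules/util.py) as I understand it, not the exact source.

import numpy as np

def uniform_ddim_timesteps(num_ddim_timesteps, num_ddpm_timesteps=1000):
    # Rough re-creation of the 'uniform' branch of make_ddim_timesteps, as I understand it.
    c = num_ddpm_timesteps // num_ddim_timesteps                     # integer stride
    ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
    return ddim_timesteps + 1                                        # the ldm code adds 1 here; it does not change the count

for requested in (25, 29, 30):
    actual = len(uniform_ddim_timesteps(requested))
    print(f"--ddim_steps {requested} -> sampler runs {actual} timesteps")

# --ddim_steps 25 -> sampler runs 25 timesteps
# --ddim_steps 29 -> sampler runs 30 timesteps
# --ddim_steps 30 -> sampler runs 31 timesteps

This matches the logs above, and would also explain why divisors of 1000 (20, 25, 40, 50, ...) report the expected count while most other values come out one high.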
Thank you for linking the fix. Can confirm it resolves the issue.