AUTOMATIC1111/stable-diffusion-webui

Allow saving intermediate Steps to separate image files during generation

aleksusklim opened this issue · 5 comments

Under the Settings tab there is an option under User interface → Show progressbar: "Show show (typo?) image creation progress every N sampling steps. Set 0 to disable."

I set it to "1" and now I can see all the intermediate images at lower step counts during generation! (Yes, I'm aware that it could slightly hurt performance.) But these images aren't saved anywhere, are they?

I'm running on 4 GB of VRAM, so each step takes a considerable amount of time (20 seconds or so at default settings), and I don't want to repeat the whole process if I spot something interesting in an intermediate image. Also, it's not convenient to just sit there during generation watching the image emerge – I would rather leave it for a while, then come back to see how it's going and decide whether to Interrupt it or let it continue.

I thought Grid mode would put the image at each lower step count into a separate cell, but apparently it regenerates the whole image each time, even if only the step count changed.

Last question: as I'm new to Stable Diffusion, it is not clear to me whether applying "img2img" to an intermediate result is the same as just letting it continue to the next steps. In other words, if I run 15 steps and then save the .png (along with all generation parameters) – how can I "continue from here", e.g. to step 30, without regenerating from the start?

If the resulting image alone is not enough to "continue", could we save something more like a "state" of the generation – to load it later and continue stepping at will (possibly with a changed prompt, since you even have special syntax to do that automatically!)?

So, to conclude:

  1. At a bare minimum – an option to somehow save all intermediate images produced during the generation steps.
  2. As an optimization for the grid: if any axis varies the step count – generate just one image with the highest number of steps, but save the intermediate images at the requested steps into the correct cells.
  3. A mode with an "infinite" number of steps that can be interrupted/paused at any time (it should still count as a success and save the image correctly) and then continued from there, with the ability to alter every parameter that can reasonably be changed mid-run (for example the prompt text, but of course not the image resolution).
  4. Save those "states" (with all parameters) at each specified step, to be able to continue exactly from there. One good application I see is generating a lot of samples at a low step count, manually cherry-picking the good ones to continue to a mid step count, and then choosing the best ones to continue further to a high step count. But if each such state turns out to be very large (for example as large as the model itself, several gigabytes), then this method would become unfeasible compared to simply regenerating the interesting images from scratch. In that case it is not really worth implementing.

Basically, all of this comes from the fact that I don't want to spend time generating something that has already been generated but wasn't saved ))
Sorry, and thank you for the --lowvram option!

I agree this would be useful. In the meantime, note that you can save the script provided at the link below (as a text file with a .py extension) into the scripts folder, and it will give you a user script that saves every step. As is, it overwrites whatever was there before, and you can't run another script (e.g. X/Y grid) at the same time – but it's probably easy enough to modify if you need that urgently.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#saving-steps-of-the-sampling-process
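For context, a script like that is just a Python file dropped into the scripts folder that subclasses the webui's scripts.Script class. The code below is only a rough sketch of that approach (not the exact code from the wiki): it assumes the sd_samplers.store_latent hook and sd_samplers.sample_to_image helper that the webui exposed at the time, and it simply decodes and saves every latent the sampler hands to that hook.

# Sketch of a save-every-step user script (approximate; see the wiki link for the real one).
import os

import gradio as gr
import modules.scripts as scripts
from modules import sd_samplers
from modules.processing import Processed, process_images


class Script(scripts.Script):
    def title(self):
        return "Save sampling steps (sketch)"

    def ui(self, is_img2img):
        path = gr.Textbox(label="Save images to path", value="outputs/steps")
        return [path]

    def run(self, p, path):
        os.makedirs(path, exist_ok=True)
        index = [0]
        original_store_latent = sd_samplers.store_latent

        def store_latent(x):
            # Decode the current latent to a PIL image and save it with a running index.
            image = sd_samplers.sample_to_image(x)
            image.save(os.path.join(path, f"{index[0]:05}.png"))
            index[0] += 1
            return original_store_latent(x)

        # Monkey-patch the hook the sampler calls on every step,
        # and restore it afterwards even if generation is interrupted.
        sd_samplers.store_latent = store_latent
        try:
            proc = process_images(p)
        finally:
            sd_samplers.store_latent = original_store_latent

        return Processed(p, proc.images, p.seed, proc.info)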

Thank you, it works!

It would probably be a lot better if it automatically created a folder under outputs/*2img-images/... (or wherever is specified in Settings), named after the target image without its extension (using the current filename pattern, either the default or the one specified in Settings), and put all of the samples there. Also, a copy of the .txt with the generation parameters (if enabled in Settings) could be saved there too, before generation starts (so it would serve as a backup of the parameters even if the run is cancelled or fails).

Personally, I'm not fluent enough in Python to implement these myself (though I did manage to change the plot script to randomize seeds: #1049 (comment)).

By the way, why aren't the source images for img2img and Extras stored anywhere? They are, in fact, vital "generation parameters" themselves! I believe a copy of each source image should be stored alongside the corresponding resulting image.

Whoops, the answer from npiguet changed everything: #1113 (comment)

My thinking was that if 10 steps brought 10u worth of detail, then surely 40 steps would bring 40u, and I could stop the process somewhere in the middle to get, for example, 25u worth of detail.

But in reality it behaves more like a subdivision of the distance between the noise and the final image. So imagine the noise is at distance 0 and the final image is at distance 1. When you set steps=10, each step advances by 0.1. When you set steps=40, each step advances by 0.025.

More steps don't get you "further"; they get you to the same destination using smaller steps, which in theory improves the definition of the final image.
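A toy illustration of that analogy (plain Python, nothing webui-specific; the "distance" is just the 0-to-1 scale from the explanation above):

# More steps means smaller steps over the same noise-to-image distance,
# not a longer journey.
for steps in (10, 40):
    step_size = 1.0 / steps
    final_position = step_size * steps  # always ends at 1.0
    print(f"steps={steps}: each step advances {step_size:.3f}, final position {final_position:.1f}")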

So there is no real point in saving intermediate steps, nor in continuing from them...
An "infinite" number of steps is not possible either.
No optimizations for the grid, alas.

Saving works fine with that script. Closing the issue as not planned, thanks everyone.

Adjust the filename and interval, run this in the browser's developer console (inspect panel), and allow multiple downloads from the page:

// stable-diffusion-webui live preview saver by Cees Timmerman, 2024-02-06.
// Triggers a browser download of the given data URL under the given name.
function download_data(data, name) {
    const a = document.createElement('a')
    a.href = data
    a.download = name
    document.body.appendChild(a)
    a.click()
    document.body.removeChild(a)
}

let live_preview_data = ""
function save_live_preview() {
    // Do nothing while the "Save live previews" checkbox is unticked.
    const control = document.getElementById('live_preview_saver_control')
    if (!control || !control.checked) return
    // Grab the current live preview image, if one is being shown.
    const img = document.querySelector(".livePreview img")
    if (!img) return
    const data = img.src
    // Only download when the preview has changed since the last poll.
    if (data != live_preview_data) {
        try {
            download_data(data, "" + Date.now() + "_live_preview.jpg")
        } catch (ex) {}
        live_preview_data = data
    }
}

// Poll the preview once per second.
const livePreviewSaver = setInterval(save_live_preview, 1000)
// To stop: clearInterval(livePreviewSaver)

// Add a checkbox to the page to toggle saving on and off.
const d = document.createElement('div')
d.innerHTML = '<input type="checkbox" checked id="live_preview_saver_control" name="live_preview_saver_control"><label for="live_preview_saver_control"> Save live previews</label>'
document.body.appendChild(d)

I want to save an intermediate image every n steps, but the scripts above can only save every single step, which is time-consuming, and I want the full-resolution image. How can I achieve that?
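One hedged way to do it, building on the store_latent sketch earlier in this thread (index, path and original_store_latent are the names from that sketch, and SAVE_EVERY_N is made up): keep counting every step, but only decode and save when the counter hits a multiple of n. Decoding the latent yields an image at the generation resolution, unlike the downscaled live preview shown in the browser.

# Hypothetical change to the store_latent wrapper: save only every n-th step.
SAVE_EVERY_N = 5  # adjust to taste

def store_latent(x):
    step = index[0]
    index[0] += 1
    if step % SAVE_EVERY_N == 0:
        # sample_to_image decodes the latent at the full generation resolution;
        # how faithful it looks depends on the webui's preview/VAE settings.
        image = sd_samplers.sample_to_image(x)
        image.save(os.path.join(path, f"{step:05}.png"))
    return original_store_latent(x)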