
SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models


SD.Next

Stable Diffusion implementation with advanced features


Wiki | Discord | Changelog



SD.Next Features

Not all individual features are listed here; instead, check the ChangeLog for the full list of changes

  • Multiple backends!
    Diffusers | Original
  • Multiple UIs!
    Standard | Modern
  • Multiple diffusion models!
    Stable Diffusion 1.5/2.1/XL/3.0/3.5 | LCM | Lightning | Segmind | Kandinsky | Pixart-α | Pixart-Σ | Stable Cascade | FLUX.1 | AuraFlow | Würstchen | Alpha Lumina | Kwai Kolors | aMUSEd | DeepFloyd IF | UniDiffusion | SD-Distilled | BLiP Diffusion | KOALA | SDXS | Hyper-SD | HunyuanDiT | CogView | OmniGen | Meissonic | etc.
  • Built-in Control for Text, Image, Batch and Video processing!
    ControlNet | ControlNet XS | Control LLLite | T2I Adapters | IP Adapters
  • Multiplatform!
    Windows | Linux | MacOS with CPU | nVidia | AMD | IntelArc/IPEX | DirectML | OpenVINO | ONNX+Olive | ZLUDA
  • Platform specific autodetection and tuning performed on install
  • Optimized processing using the latest torch developments, with built-in support for torch.compile
    and multiple compile backends: Triton, ZLUDA, StableFast, DeepCache, OpenVINO, NNCF, IPEX, OneDiff (see the sketch after this list)
  • Improved prompt parser
  • Enhanced Lora/LoCon/Lyco code supporting the latest trends in training
  • Built-in queue management
  • Enterprise-level logging and hardened API
  • Built-in installer with automatic updates and dependency management
  • Modernized UI with theme support and a number of built-in themes (dark and light)
  • Mobile compatible
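
As a rough illustration of the torch.compile support mentioned in the list above, this is approximately what compiling a pipeline's denoiser looks like in plain PyTorch + diffusers; the model repo is a placeholder, and SD.Next configures its compile backends through settings rather than user code:

```python
# Sketch: applying torch.compile to a diffusion pipeline's denoiser in plain
# PyTorch + diffusers. Illustrative only; SD.Next selects and configures its
# compile backends (Triton, StableFast, OpenVINO, ...) through its own settings.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder model repo
    torch_dtype=torch.float16,
).to("cuda")

# Compile the most compute-heavy module; "inductor" is the stock PyTorch backend,
# vendor backends register their own names with torch.compile.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead")

image = pipe("a watercolor painting of a lighthouse", num_inference_steps=25).images[0]
image.save("lighthouse.png")
```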

Main interface using StandardUI:
screenshot-text2image

Main interface using ModernUI:

screenshot-modernui-f1 screenshot-modernui screenshot-modernui-sd3

For screenshots and information on other available themes, see the Themes Wiki


Model support

Additional models will be added as they become available and there is public interest in them.
See the models overview for details on each model, including its architecture, complexity and other info

Also supported are modifiers such as:

  • LCM, Turbo and Lightning (adversarial diffusion distillation) networks
  • All LoRA types such as LoCon, LyCORIS, HADA, IA3, Lokr, OFT
  • IP-Adapters for SD 1.5 and SD-XL (a loading sketch for these modifiers follows this list)
  • InstantID, FaceSwap, FaceID, PhotoMerge
  • AnimateDiff for SD 1.5
  • MuLAN multi-language support
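
As a hedged sketch of how modifiers such as LoRA and IP-Adapters attach to a pipeline at the diffusers level (repo names, file names and the reference image are placeholders; inside SD.Next these are applied via the Networks UI):

```python
# Sketch: attaching a LoRA and an IP-Adapter to an SD 1.5 pipeline via diffusers.
# Repo/file names are placeholders; SD.Next handles this through the Networks UI.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# LoRA / LoCon / LyCORIS weights in .safetensors format (placeholder path)
pipe.load_lora_weights("path/to/lora", weight_name="my_style.safetensors")

# IP-Adapter image prompt
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
style_image = load_image("reference.png")  # placeholder reference image

image = pipe(
    prompt="portrait photo, soft light",
    ip_adapter_image=style_image,
    num_inference_steps=25,
).images[0]
image.save("out.png")
```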

Platform support

  • nVidia GPUs using CUDA libraries on both Windows and Linux
  • AMD GPUs using ROCm libraries on Linux
    Support will be extended to Windows once AMD releases ROCm for Windows
  • Intel Arc GPUs using OneAPI with IPEX XPU libraries on both Windows and Linux
  • Any GPU compatible with DirectX on Windows using DirectML libraries
    This includes support for AMD GPUs that are not supported by native ROCm libraries
  • Any GPU or device compatible with OpenVINO libraries on both Windows and Linux
  • Apple M1/M2 on OSX using built-in support in Torch with MPS optimizations
  • ONNX/Olive

Backend support

SD.Next supports two main backends: Diffusers and Original:

  • Diffusers: Based on the new Huggingface Diffusers implementation
    Supports all listed models
    This backend is the default for new installations
    See the wiki article for more information
  • Original: Based on the LDM reference implementation and significantly expanded on by A1111
    This backend is fully compatible with most existing functionality and extensions written for A1111 SDWebUI
    Supports SD 1.x and SD 2.x models
    All other model types such as SD-XL, LCM, Stable Cascade, PixArt, Playground, Segmind, Kandinsky, etc. require the Diffusers backend
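
For context, the Diffusers backend builds on the Huggingface diffusers library; a minimal standalone sketch of that underlying API (model ID and scheduler choice are illustrative, not SD.Next internals):

```python
# Sketch of the Huggingface diffusers API that the Diffusers backend builds on.
# Model ID and scheduler choice are illustrative; SD.Next manages pipelines,
# samplers and offloading internally.
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
# Samplers map to interchangeable diffusers scheduler classes
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")  # or "cpu", "mps", "xpu" depending on platform

image = pipe(prompt="a photo of a red fox in the snow", num_inference_steps=30).images[0]
image.save("fox.png")
```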

Examples

IP Adapters: screenshot-ipadapter

Color grading:
screenshot-control

InstantID:
screenshot-instantid

Important

  • Loading any model other than standard SD 1.x / SD 2.x requires use of the Diffusers backend
  • Loading any other models using the Original backend is not supported
  • Loading manually downloaded .safetensors model files is supported for specific models only (typically SD 1.x / SD 2.x / SD-XL models)
  • For all other model types, use the Diffusers backend and either use the built-in Model downloader or
    select the model from Networks -> Models -> Reference, in which case it will be auto-downloaded and loaded
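
The distinction above mirrors diffusers' single-file loading, which exists only for certain architectures; a minimal sketch, with a placeholder checkpoint path:

```python
# Sketch: why manually downloaded .safetensors files only work for some model families.
# diffusers can reconstruct a full pipeline from a single checkpoint file only for
# architectures with single-file support (e.g. SD 1.x / SD 2.x / SD-XL); other models
# ship as multi-folder Huggingface repos and are downloaded as such.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "/models/Stable-diffusion/my-sdxl-checkpoint.safetensors",  # placeholder path
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```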

Install

Tip

  • If you can't run SD.Next locally, try cloud deployment using RunDiffusion!
  • Server can run with or without a virtual environment,
    Using a VENV is recommended to avoid library version conflicts with other applications
  • nVidia/CUDA, AMD/ROCm and Intel/OneAPI are auto-detected if present and available,
    For any other use case such as DirectML, ONNX/Olive or OpenVINO, specify the required parameter explicitly,
    otherwise the wrong packages may be installed as the installer will assume a CPU-only environment
  • Full startup sequence is logged in sdnext.log,
    so if you encounter any issues, please check it first

Run

Once SD.Next is installed, simply run webui.ps1 or webui.bat (Windows) or webui.sh (Linux or MacOS)

For the full and up-to-date list of available command line options, run webui --help

Tip

All command line options can also be set via environment variables; for example, --debug is the same as set SD_DEBUG=true
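
A small illustration of that equivalence, assuming launch.py sits in the current directory (the subprocess wrapper is purely for demonstration):

```python
# Illustration: passing --debug on the command line and setting SD_DEBUG in the
# environment have the same effect. Assumes launch.py in the current directory.
import os
import subprocess

env = dict(os.environ, SD_DEBUG="true")  # equivalent to passing --debug
subprocess.run(["python", "launch.py"], env=env, check=True)
# equivalent: subprocess.run(["python", "launch.py", "--debug"], check=True)
```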

Notes

Tip

If you don't want to use the built-in venv support and prefer to run SD.Next in your own environment, such as a Docker container, Conda environment or any other virtual environment, you can skip venv create/activate and launch SD.Next directly using python launch.py (command line flags noted above still apply).

Quantization

SD.Next comes with broad quantization support, including BitsAndBytes, Optimum.Quanto, TorchAO, NNCF and GGUF. See the Quantization Wiki for details
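
As an illustration of what such quantization does under the hood, here is a minimal sketch using the diffusers + bitsandbytes integration (assumes a recent diffusers release with BitsAndBytesConfig and bitsandbytes installed; the model ID is a placeholder, and SD.Next applies quantization via its settings rather than user code):

```python
# Sketch: 4-bit (NF4) weight quantization via the diffusers + bitsandbytes integration.
# Illustrative only; SD.Next configures quantization through its own settings.
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",  # placeholder model repo
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```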

Control

SD.Next comes with built-in control for all types of text2image, image2image, video2video and batch processing
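
For background, ControlNet-style conditioning at the diffusers level looks roughly like the sketch below (model IDs and the control image are placeholders; SD.Next drives this through the Control UI with its own processors and units):

```python
# Sketch: ControlNet-style conditioning at the diffusers level (illustrative only;
# SD.Next exposes this through the Control UI with built-in processors).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_edges = load_image("canny_edges.png")  # placeholder pre-processed control image
image = pipe(
    "a modern glass house in a forest",
    image=canny_edges,
    num_inference_steps=25,
).images[0]
image.save("controlled.png")
```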

Control interface:
screenshot-control

Control processors:
screenshot-processors

Masking: screenshot-mask

Extensions

SD.Next comes with several extensions pre-installed.

Collab

  • We'd love to have additional maintainers (which comes with full repo rights). If you're interested, ping us!
  • In addition to general cross-platform code, the desire is to have a lead for each of the main platforms;
    we'd really love additional contributors and/or maintainers to join and help lead the efforts on different platforms

Credits

Evolution


Docs

If you're unsure how to use a feature, the best place to start is the Wiki; if it's not there,
check the ChangeLog for when the feature was first introduced, as it will always have a short note on how to use it

Sponsors

Allan Grant | Brent Ozar | Matthew Runo | a.v.mantzaris | SML (See-ming Lee)