
diffusion-ui

This is a web frontend for generating images with diffusion models.

The goal is to provide an interface to online and offline backends that perform image generation and inpainting, such as Stable Diffusion.

Documentation

The documentation is available here

Technologies

Diffusion UI was made using:

Features

  • Text-to-image
  • Image-to-image:
    • from an uploaded image
    • from a drawing made on the interface
  • Inpainting
    • Including the ability to draw inside the inpainting region
  • Modular support for different backends (see the sketch after this list):
    • a basic Stable Diffusion backend
    • the full-featured automatic1111 fork
    • the online free Stable Horde
  • Modification of model parameters in the left tab
  • Image gallery of previously generated images in the right tab
  • Create variations and inpainting edits of previously generated images
  • Share the backend running on your PC to use it from your smartphone or tablet
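
The backend abstraction itself is not described in this README. As a purely illustrative sketch (the type and method names below are hypothetical, not the project's actual code), modular backend support usually means that every backend exposes the same small surface to the UI, so text-to-image, image-to-image and inpainting requests can be routed to a local Stable Diffusion install, the automatic1111 API or the Stable Horde interchangeably:

```typescript
// Hypothetical sketch of a backend adapter; names are illustrative only.
// Each concrete backend (basic Stable Diffusion, automatic1111, Stable Horde)
// would implement the same interface, and the UI would depend only on it.

interface GenerationRequest {
  prompt: string;
  width: number;
  height: number;
  steps: number;
  initImage?: string; // base64 source image for image-to-image
  maskImage?: string; // base64 mask for inpainting
}

interface GenerationResult {
  images: string[];   // base64-encoded generated images
}

interface DiffusionBackend {
  name: string;
  generate(request: GenerationRequest): Promise<GenerationResult>;
}
```

Keeping the request and result shapes backend-agnostic is what lets the gallery, variation and inpainting features behave the same way regardless of which backend produced an image.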

Frontend

The frontend is available at diffusionui.com (note: you still need a local backend to make it work with Stable Diffusion).

Alternatively, you can run it locally.

Backends

Stable Diffusion local backend

To install the Stable Diffusion backend, follow the instructions in the docs

Automatic1111 Stable Diffusion local backend

To use the Automatic1111 fork of Stable Diffusion, follow the instructions here
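
For orientation, here is a minimal sketch of how a browser frontend can call a locally running automatic1111 instance once its API is enabled (the linked instructions cover the exact launch flags and CORS setup). The default port 7860 and the payload fields below come from the automatic1111 API and are assumptions, not code from this repository:

```typescript
// Minimal sketch: request a text-to-image generation from a locally running
// automatic1111 instance. Assumes the web UI listens on the default port 7860
// with its API enabled and CORS allowed for the frontend's origin.
async function txt2img(prompt: string): Promise<string[]> {
  const response = await fetch("http://127.0.0.1:7860/sdapi/v1/txt2img", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, steps: 20, width: 512, height: 512 }),
  });
  if (!response.ok) {
    throw new Error(`automatic1111 returned HTTP ${response.status}`);
  }
  const data = await response.json();
  return data.images as string[]; // base64-encoded PNGs
}
```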

Stable Horde online backend

To generate images for free using the Stable Horde, follow the instructions here

License

The code in this repository is released under the MIT License.

The Stable Diffusion model is covered by the CreativeML Open RAIL-M license.