Lsmith

StableDiffusionWebUI accelerated using TensorRT

Primary language: TypeScript · License: Apache-2.0

Lsmith is a fast StableDiffusionWebUI that uses TensorRT for high-speed inference.


Benchmark


Screenshots

  • Batch generation


  • img2img support


Installation

Docker (all platforms) | Easy

  1. Clone repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  2. Launch using Docker Compose
docker-compose up --build

Data such as models and output images are saved in the docker-data directory.
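The layout below is an illustrative sketch of that directory; the subdirectory names are assumptions for illustration, not guaranteed by the project:

```
docker-data/
├── models/    # downloaded weights and built TensorRT engines (hypothetical name)
└── outputs/   # generated images (hypothetical name)
```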

Customization

There are two types of Dockerfile.

  • Dockerfile.full: builds the TensorRT plugin from source. The build can take tens of minutes.
  • Dockerfile.lite: downloads the pre-built TensorRT plugin from GitHub Releases, significantly reducing build time.

You can change which Dockerfile is used by editing the value of services.lsmith.build.dockerfile in docker-compose.yml. By default it uses Dockerfile.lite.
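For example, switching to the full build would look roughly like this in docker-compose.yml (only the dockerfile key comes from the text above; the surrounding values are illustrative):

```yaml
services:
  lsmith:
    build:
      context: .
      dockerfile: Dockerfile.full   # default: Dockerfile.lite
```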

Linux | Difficult

Requirements

  • Python 3.10
  • pip
  • CUDA
  • cuDNN 8.6.0
  • TensorRT 8.5.x
  1. Clone the Lsmith repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  2. Run launch.sh, e.g.:
bash launch.sh --host 0.0.0.0
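As a rough sketch of what flags like --host do, a launcher of this kind typically parses them and falls back to defaults. This is a hypothetical illustration, not the actual contents of launch.sh:

```shell
# Hypothetical flag parsing (assumed defaults; launch.sh may differ)
parse_args() {
  host="127.0.0.1"
  port="8000"
  while [ "$#" -gt 0 ]; do
    case "$1" in
      --host) host="$2"; shift 2 ;;   # bind address, e.g. 0.0.0.0 for LAN access
      --port) port="$2"; shift 2 ;;   # port the WebUI listens on
      *) shift 1 ;;                   # ignore unknown flags in this sketch
    esac
  done
  echo "${host}:${port}"
}

parse_args --host 0.0.0.0   # prints 0.0.0.0:8000
```

Passing --host 0.0.0.0, as in the example above, makes the server reachable from other machines on the network rather than only from localhost.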

Windows | Difficult

Requirements

  • Python 3.10
  • pip
  • CUDA
  • cuDNN 8.6.0
  • TensorRT 8.5.x
  1. Install the NVIDIA GPU driver
  2. Install CUDA 11.x (see the official guide)
  3. Install cuDNN 8.6.0 (see the official guide)
  4. Install TensorRT 8.5.3.1 (see the official guide)
  5. Clone the Lsmith repository
git clone https://github.com/ddPn08/Lsmith.git
cd Lsmith
git submodule update --init --recursive
  6. Launch launch-user.bat

Usage

Once started, access <ip address>:<port number> (e.g. http://localhost:8000) in your browser to open the WebUI.

First, you need to convert an existing Diffusers model into a TensorRT engine.

Building the TensorRT engine

  1. Click on the "Engine" tab.
  2. Enter a Hugging Face Diffusers model ID in Model ID (e.g. CompVis/stable-diffusion-v1-4).
  3. Enter your Hugging Face access token in HuggingFace Access Token (required for some repositories). Access tokens can be created in your Hugging Face account settings.
  4. Click the Build button to start building the engine.
    • There may be some warnings during the engine build, but you can safely ignore them unless the build fails.
    • The build can take tens of minutes. For reference, it takes about 15 minutes on average on an RTX 3060 12GB.

Generate images

  1. Select the model in the header dropdown.
  2. Click on the "Generate" tab
  3. Click "Generate" button.



Special thanks to the technical members of the AI 絵作り研究会, a Japanese AI image generation community.