- For Windows using WSL2, create a .wslconfig file in your user profile directory (%UserProfile%\.wslconfig) with the following content:
[wsl2]
memory=16GB # Limits VM memory in WSL 2 to 16 GB
processors=4 # Makes the WSL 2 VM use 4 virtual processors
localhostForwarding=true
- Adjust the values if your PC can't handle them.
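- After changing .wslconfig, shut down the WSL VM so the new limits take effect (it restarts automatically the next time a WSL command or Docker Desktop needs it):
```
wsl --shutdown
```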
- Install Docker Desktop for Windows.
- Make sure you are running the newest NVIDIA drivers and the newest Docker Desktop version.
- See https://docs.microsoft.com/en-us/windows/ai/directml/gpu-cuda-in-wsl for more information about GPU support in WSL.
  (All newer driver and Docker versions should support it out of the box.)
  Test with:
  wsl cat /proc/version
  This needs a kernel version of 5.10.43.3 or higher.
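  You can also check that the GPU is visible from inside WSL (this assumes the Windows NVIDIA driver with WSL support is installed; no separate driver is needed inside the WSL distro):
```
wsl nvidia-smi
```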
- Build the image yourself with:
  docker build -t stable-diffusion-guitard .
- Run the prebuilt image with:
  docker run -d --gpus all -p 7860:8080 -v ./.cache/app:/root/.cache -v ./.cache/facexlib/:/opt/conda/envs/ldm/lib/python3.8/site-packages/facexlib/weights/ -v ./models/:/models/ -v ./outputs/:/outputs/ -e RUN_MODE=false sharrnah/stable-diffusion-guitard
  (Replace the image name "sharrnah/stable-diffusion-guitard" with "stable-diffusion-guitard" to run the self-built image, as shown below.)
  Change "RUN_MODE" depending on your machine. (See Options for more info.)
- Download / clone this repo.
  (Or just get the docker-compose.yaml file if you want to use the prebuilt image and skip the "build image yourself" step.)
- Start the docker-compose project.
  - Building the image yourself:
    docker compose -f docker-compose.build.yaml up -d --build
  - Starting the prebuilt image:
    docker compose up -d
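  If you only grabbed the docker-compose.yaml, it looks roughly like the following sketch (based on the docker run command above, not the exact file shipped in this repo):
```yaml
services:
  stablediffusion:
    image: sharrnah/stable-diffusion-guitard
    ports:
      - "7860:8080"                 # host port 7860 -> container port 8080
    environment:
      - RUN_MODE=false              # or OPTIMIZED, GTX16, ... (see Options)
    volumes:
      - ./.cache/app:/root/.cache
      - ./.cache/facexlib/:/opt/conda/envs/ldm/lib/python3.8/site-packages/facexlib/weights/
      - ./models/:/models/
      - ./outputs/:/outputs/
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # GPU passthrough, equivalent to --gpus all
              count: all
              capabilities: [gpu]
```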
- See the current logs with:
  docker compose logs stablediffusion -f
- You can exec into the container with:
  docker compose exec stablediffusion bash
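  To verify that the container actually sees the GPU, you can run nvidia-smi inside it (assuming GPU passthrough is working, it should list your card):
```
docker compose exec stablediffusion nvidia-smi
```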
- You can set the environment variable "RUN_MODE" to one of these settings:
  - "OPTIMIZED" for reduced-memory mode (sacrificing speed)
  - "OPTIMIZED-TURBO" for a less reduced-memory mode (sacrificing less speed)
  - "GTX16" when generated images are green (known problem on GTX 16xx GPUs)
  - "GTX16-TURBO" when generated images are green (known problem on GTX 16xx GPUs) [using OPTIMIZED-TURBO]
  - "FULL-PRECISION" to use full precision
- Set the environment variable "WEBUI_RELAUNCH" to
  - "true" (default) for automatic restarting of the WebUI
  - "false" to disable automatic restarting of the WebUI
- For that you can create a .env file and set its content to RUN_MODE=OPTIMIZED or RUN_MODE=GTX16, as in the example below. (See the example.env file, which includes all possible values.)
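  A minimal .env file could look like this (example values only; example.env lists everything that is available):
```
RUN_MODE=OPTIMIZED
WEBUI_RELAUNCH=true
```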
- After the WebUI has started successfully, you should see a log output saying:
  Running on local URL: http://127.0.0.1:7860/
- See the current log with:
  docker compose logs stablediffusion -f
- Open http://127.0.0.1:7860/ in your browser to use it.
- All generated images are saved into the ./outputs/ directory.
- Models should be downloaded automatically on the first run now!
- Download the v1.4 Stable Diffusion model from one of the following sources:
  - Web:
  - Torrent Magnet:
    magnet:?xt=urn:btih:3a4a612d75ed088ea542acac52f9f45987488d1c&dn=sd-v1-4.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337
  - Hugging Face:
- Place the downloaded model file into the models/ directory and name it SDv1.4.ckpt (case-sensitive).
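  For example, assuming the model was downloaded as sd-v1-4.ckpt into the current directory:
```
mv sd-v1-4.ckpt models/SDv1.4.ckpt
```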
- Download the GFPGAN v1.3.0 model from here:
- Place the downloaded model file into the models/ directory and name it GFPGANv1.3.pth (case-sensitive).
- Download the RealESRGAN x4plus model from here:
- Download the RealESRGAN x4plus anime model from here:
- Place the downloaded model files into the models/ directory and name them RealESRGAN_x4plus.pth and RealESRGAN_x4plus_anime_6B.pth (case-sensitive).
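  For example, assuming the GFPGAN and RealESRGAN files were downloaded with their upstream names (which already match the expected names) into the current directory:
```
mv GFPGANv1.3.pth RealESRGAN_x4plus.pth RealESRGAN_x4plus_anime_6B.pth models/
```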