---
title: Real-Time Latent Consistency Model Image-to-Image ControlNet
emoji: 🖼️🖼️
colorFrom: gray
colorTo: indigo
sdk: docker
pinned: false
suggested_hardware: a10g-small
---
- I've added a Desktop Capture feature for Img2Img ControlNet/Canny.
- I've added several Windows .bat scripts to make getting started easier.
This demo showcases the Latent Consistency Model (LCM) using Diffusers with an MJPEG stream server.
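The MJPEG streaming that the server relies on is just a `multipart/x-mixed-replace` HTTP response in which each part is one JPEG frame. A minimal sketch of that framing, assuming placeholder helper names and boundary string (not the app's actual code):

```python
# Sketch of MJPEG framing: each JPEG frame becomes one part of a
# multipart/x-mixed-replace response. Names here are illustrative.
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a single multipart part."""
    return (
        b"--" + BOUNDARY + b"\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
        + jpeg_bytes + b"\r\n"
    )

def mjpeg_stream(frames):
    """Yield each frame already wrapped for the stream; a real server
    would send these chunks over an open HTTP response."""
    for frame in frames:
        yield mjpeg_part(frame)
```

The browser keeps the connection open and replaces the displayed image each time a new part arrives, which is what makes the preview feel real-time.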
You need CUDA and Python 3.10, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.
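Those hardware options map onto different torch backends. A best-effort device pick could look like the following sketch (the function name is hypothetical, and the checks degrade to CPU if torch is absent):

```python
import importlib.util

def pick_device() -> str:
    """Return a torch device string for the hardware listed above:
    NVIDIA CUDA, Apple Silicon (MPS), Intel Arc (XPU), else CPU."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch
    if torch.cuda.is_available():
        return "cuda"
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return "mps"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"
```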
- `TIMEOUT`: limit the user session timeout.
- `SAFETY_CHECKER`: set off to disable the NSFW filter (currently disabled for Img2Img ControlNet/Canny).
- `MAX_QUEUE_SIZE`: limit the number of users on the current app instance.
- `TORCH_COMPILE`: enable `torch.compile` for faster inference; works well on A100 GPUs.
```bash
python -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
```

OR run the included "_Step_1_Install.bat".
If you're running locally and want to test it in Mobile Safari, the web server needs to be served over HTTPS. This requires OpenSSL to be installed on your system, and it is only needed if you want to access the RT-LCM web UI remotely.
```bash
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload --log-level info --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
```

OR run the included "_Step_2_Optional_Create_SSL_Needed_for_Remote_access.bat".
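Before launching uvicorn with the SSL flags, you can sanity-check that the generated certificate/key pair loads. This helper is a sketch using Python's standard-library `ssl` module, not part of the app:

```python
import ssl

def cert_is_loadable(certfile: str, keyfile: str) -> bool:
    """Check that a cert/key pair (e.g. certificate.pem / key.pem
    generated above) loads into a server-side SSL context."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    try:
        ctx.load_cert_chain(certfile, keyfile)
        return True
    except (ssl.SSLError, OSError):
        return False
```

If this returns False, uvicorn would fail at startup with the same underlying error.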
Pipeline based on work by taabata.
```bash
uvicorn "app-controlnet:app" --host 0.0.0.0 --port 7860 --reload
```

OR run "_Step_3a_Start_RTLCM-With_SSL_ControlNet.bat" if you are using an SSL cert, or "_Step_3b_Start_RTLCM-Without_SSL_ControlNet.bat" if you are not.

Img2Img with ControlNet is currently the only mode updated with the "Capture Desktop" option.
```bash
uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
```
Using LCM-LoRA gives the pipeline the superpower of doing inference in as few as 4 steps. Learn more here or in the technical report.
```bash
uvicorn "app-controlnetlora:app" --host 0.0.0.0 --port 7860 --reload
uvicorn "app-txt2imglora:app" --host 0.0.0.0 --port 7860 --reload
```
```bash
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 --reload
```
You need the NVIDIA Container Toolkit for Docker.

```bash
docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
```

or with environment variables:

```bash
docker run -ti -e TIMEOUT=0 -e SAFETY_CHECKER=False -p 7860:7860 --gpus all lcm-live
```
Live demo: https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model