
Prompt-Free Diffusion


This repo hosts the official implementation of:

Xingqian Xu, Jiayi Guo, Zhangyang Wang, Gao Huang, Irfan Essa, and Humphrey Shi, Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, Paper arXiv Link.

News

Introduction

Prompt-Free Diffusion is a diffusion model that relies only on visual inputs to generate new images. It replaces the commonly used CLIP-based text encoder with a Semantic Context Encoder (SeeCoder). SeeCoder is reusable with most public T2I models, as well as adaptive layers such as ControlNet, LoRA, and T2I-Adapter. Just drop it in and play!
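Conceptually, SeeCoder slots into the same place as the CLIP text encoder: the denoising U-Net still receives a sequence of context embeddings through cross-attention, only now they are computed from a reference image. The sketch below illustrates that substitution; every name in it is an invented stand-in, not this repo's actual API:

```python
# Illustrative sketch of the conditioning pathway of a T2I diffusion model,
# with the text encoder swapped for a visual Semantic Context Encoder.
# All functions here are hypothetical placeholders, not the repo's real API.

def clip_text_encoder(prompt: str) -> list[float]:
    # Placeholder: a real CLIP encoder maps tokens -> context embeddings.
    return [float(ord(c)) for c in prompt[:4]]

def seecoder(reference_image: list[list[float]]) -> list[float]:
    # Placeholder: SeeCoder maps image features into the SAME kind of
    # context embeddings the U-Net's cross-attention expects.
    return [sum(row) / len(row) for row in reference_image]

def denoise(latent: list[float], context: list[float]) -> list[float]:
    # Placeholder for one U-Net denoising step; it only sees `context`,
    # so it cannot tell whether the embeddings came from text or vision.
    return [x + 0.1 * c for x, c in zip(latent, context)]

latent = [0.0, 0.0, 0.0, 0.0]
image = [[0.2, 0.4], [0.1, 0.3], [0.5, 0.5], [0.0, 1.0]]

# Text-to-image conditioning:
out_text = denoise(latent, clip_text_encoder("cat "))
# Prompt-free conditioning: same denoise call, different encoder.
out_visual = denoise(latent, seecoder(image))
```

Because the U-Net only consumes the context tensor, this is also why SeeCoder composes with ControlNet, LoRA, and similar adapters without retraining them.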

Performance

Network

Setup

conda create -n prompt-free-diffusion python=3.10
conda activate prompt-free-diffusion
pip install torch==2.0.0+cu117 torchvision==0.15.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirements.txt
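After installation, it can save a failed demo launch to sanity-check that the key dependencies resolve. This small helper is not part of the repo; it is a stdlib-only sketch:

```python
from importlib.util import find_spec

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if find_spec(n) is None]

# Packages the demo is expected to need: torch/torchvision from the
# pip commands above, and gradio for the WebUI.
missing = missing_packages(["torch", "torchvision", "gradio"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("Environment looks ready.")
```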

Demo

We provide a WebUI empowered by Gradio. Start the WebUI with the following command:

python app.py

Pretrained models

To support the full functionality of our demo, you need the following models located at these paths:

└── pretrained
    β”œβ”€β”€ pfd
    |   β”œβ”€β”€ vae
    |   β”‚   └── sd-v2-0-base-autokl.pth
    |   β”œβ”€β”€ diffuser
    |   β”‚   β”œβ”€β”€ AbyssOrangeMix-v2.safetensors
    |   β”‚   β”œβ”€β”€ AbyssOrangeMix-v3.safetensors
    |   β”‚   β”œβ”€β”€ Anything-v4.safetensors
    |   β”‚   β”œβ”€β”€ Deliberate-v2-0.safetensors
    |   β”‚   β”œβ”€β”€ OpenJouney-v4.safetensors
    |   β”‚   β”œβ”€β”€ RealisticVision-v2-0.safetensors
    |   β”‚   └── SD-v1-5.safetensors
    |   └── seecoder
    |       β”œβ”€β”€ seecoder-v1-0.safetensors
    |       β”œβ”€β”€ seecoder-pa-v1-0.safetensors
    |       └── seecoder-anime-v1-0.safetensors
    └── controlnet
        β”œβ”€β”€ control_sd15_canny_slimmed.safetensors
        β”œβ”€β”€ control_sd15_depth_slimmed.safetensors
        β”œβ”€β”€ control_sd15_hed_slimmed.safetensors
        β”œβ”€β”€ control_sd15_mlsd_slimmed.safetensors
        β”œβ”€β”€ control_sd15_normal_slimmed.safetensors
        β”œβ”€β”€ control_sd15_openpose_slimmed.safetensors
        β”œβ”€β”€ control_sd15_scribble_slimmed.safetensors
        β”œβ”€β”€ control_sd15_seg_slimmed.safetensors
        β”œβ”€β”€ control_v11p_sd15_canny_slimmed.safetensors
        β”œβ”€β”€ control_v11p_sd15_lineart_slimmed.safetensors
        β”œβ”€β”€ control_v11p_sd15_mlsd_slimmed.safetensors
        β”œβ”€β”€ control_v11p_sd15_openpose_slimmed.safetensors
        β”œβ”€β”€ control_v11p_sd15s2_lineart_anime_slimmed.safetensors
        β”œβ”€β”€ control_v11p_sd15_softedge_slimmed.safetensors
        └── preprocess
            β”œβ”€β”€ hed
            β”‚   └── ControlNetHED.pth
            β”œβ”€β”€ midas
            β”‚   └── dpt_hybrid-midas-501f0c75.pt
            β”œβ”€β”€ mlsd
            β”‚   └── mlsd_large_512_fp32.pth
            β”œβ”€β”€ openpose
            β”‚   β”œβ”€β”€ body_pose_model.pth
            β”‚   β”œβ”€β”€ facenet.pth
            β”‚   └── hand_pose_model.pth
            └── pidinet
                └── table5_pidinet.pth

All models can be downloaded from the Hugging Face link.
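With this many checkpoints, it is easy to misplace one in the tree above. A stdlib-only checker (not part of the repo; the list below covers only a few of the required files as an example, so extend it to match your downloads) can verify the layout:

```python
from pathlib import Path

# A few of the expected checkpoint paths from the tree above.
# Extend this list to cover every file you downloaded.
EXPECTED = [
    "pretrained/pfd/vae/sd-v2-0-base-autokl.pth",
    "pretrained/pfd/diffuser/SD-v1-5.safetensors",
    "pretrained/pfd/seecoder/seecoder-v1-0.safetensors",
    "pretrained/controlnet/control_sd15_canny_slimmed.safetensors",
]

def check_models(root=".", expected=EXPECTED):
    """Return the expected checkpoint paths missing under `root`."""
    root = Path(root)
    return [p for p in expected if not (root / p).is_file()]

if __name__ == "__main__":
    missing = check_models()
    for p in missing:
        print("missing:", p)
    if not missing:
        print("all expected checkpoints found")
```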

Tools

We also provide tools to convert pretrained models from sdwebui and the Hugging Face diffusers library to this codebase. Please modify the following files:

└── tools
    β”œβ”€β”€ get_controlnet.py
    └── model_conversion.py

You are expected to do some custom coding to make these tools work (e.g., changing hardcoded input/output file paths).
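Checkpoint conversion between sdwebui/diffusers naming and this codebase largely comes down to renaming state-dict keys. A dependency-free sketch of that idea follows; the prefix table is invented for illustration and does not match the real tools' mapping:

```python
# Illustrative key-renaming pass over a state dict. The prefix table is a
# made-up example, NOT the actual mapping used by the scripts in tools/.
PREFIX_MAP = {
    "model.diffusion_model.": "unet.",    # hypothetical sdwebui -> codebase
    "first_stage_model.": "vae.",         # hypothetical
    "cond_stage_model.": "ctx_encoder.",  # hypothetical
}

def rename_keys(state_dict, prefix_map=PREFIX_MAP):
    """Return a new dict with matching key prefixes rewritten."""
    out = {}
    for key, value in state_dict.items():
        new_key = key
        for old, new in prefix_map.items():
            if key.startswith(old):
                new_key = new + key[len(old):]
                break
        out[new_key] = value
    return out
```

In the real tools you would load the tensors (e.g., with safetensors or torch.load), rename the keys, and save the result; as noted above, the input/output paths are hardcoded and need editing.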

Performance Anime

Citation

Acknowledgement

Part of the code reorganizes/reimplements code from the following repositories: the Versatile Diffusion official GitHub and the ControlNet sdwebui GitHub, which in turn were greatly influenced by the LDM official GitHub and the DDPM official GitHub.