A packaged version of OOTDiffusion that works with pip.
No need to manually download models, checkpoints, weights, etc. It should work out of the box.
Requires CUDA and a GPU.
- Try on Replicate: dc (full body)
- Try on Replicate: hd (half body)
```bash
pip install git+https://github.com/qiweiii/oot_diffusion.git
```
The examples below target Colab (a T4 GPU is enough), but they can run anywhere.
If you don't set `hg_root`, a folder called `ootd_models` will be created in your working directory.
## Load model

```python
from oot_diffusion import OOTDiffusionModel

def get_ootd_model():
    model = OOTDiffusionModel(
        hg_root="/content/models",
        cache_dir="/content/drive/MyDrive/hf_cache",
    )
    return model
```
## Generate image

```python
def generate_image():
    model = get_ootd_model()
    generated_images, mask_image = model.generate(
        model_path="/YOUR_MODEL.jpg",
        cloth_path="/YOUR_GARMENT.jpg",
        steps=10,
        cfg=2.0,
        num_samples=2,
    )
    return generated_images, mask_image
```
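Try-on pipelines generally work best on portrait-oriented photos, so it can help to normalize your inputs before calling `generate`. This is a minimal sketch using plain Pillow; the `prepare_input` helper and the 768×1024 default are my own assumptions, not a documented requirement of this package:

```python
from PIL import Image

def prepare_input(path, size=(768, 1024)):
    """Open a photo and resize it to a portrait resolution.

    The 768x1024 default is an assumption borrowed from common
    virtual try-on pipelines, not a requirement of oot_diffusion.
    """
    img = Image.open(path).convert("RGB")
    return img.resize(size, Image.LANCZOS)
```

You can then save the resized image to disk and pass that path as `model_path` or `cloth_path`.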
## Display images

```python
from IPython.display import display

generated_images, mask_image = generate_image()

for image in generated_images:
    display(image)
display(mask_image)
```
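`display` only works inside a notebook; when running as a plain script you may want to persist the outputs instead. A small sketch using only Pillow — the `save_results` function and the output layout are my own, not part of the package:

```python
from pathlib import Path

def save_results(generated_images, mask_image, out_dir="ootd_output"):
    """Write each generated sample and the mask as PNGs; return their paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    sample_paths = []
    for i, image in enumerate(generated_images):
        p = out / f"sample_{i}.png"
        image.save(p)
        sample_paths.append(p)
    mask_path = out / "mask.png"
    mask_image.save(mask_path)
    return sample_paths, mask_path
```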
## Credits

This repo is created on the shoulders of amazing projects:

- The original author of the packaged ootd
- The original author of the oot cog samples
- The original authors of OOTDiffusion
- The authors of ComfyUI-OOTDiffusion, who made the code easier to package
- See oms-Diffusion for the official implementation of OOTDiffusion
I created this repo to deploy the full-body version to Replicate:

- [cog-dc] for the full-body API
- [cog-hd] for the upper-body API