PIA_LocalHost_Windows

A local host setup for PIA on Windows

Primary language: Python · License: Apache-2.0

PIA: Personalized Image Animator

PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models

Yiming Zhang†, Zhening Xing†, Yanhong Zeng, Youqing Fang, Kai Chen*

(*Corresponding Author, †Equal Contribution)

arXiv Project Page Open in OpenXLab

PIA is a personalized image animation method that generates videos with high motion controllability and strong text and image alignment.

What's New

[2023/12/22] Release the model and demo of PIA. Try it to make your personalized movie!

Setup of Local Host in Windows

1.local_Install_cn.ps1 (use 1.local_Install.ps1 if you are not in China)

Run 1.local_Install_cn.ps1 in PowerShell; it will create a venv for PIA.
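If the script fails, the venv it creates can be reproduced by hand. A minimal sketch, assuming Python 3 is on PATH (on Windows PowerShell the activation command is .\venv\Scripts\Activate.ps1 instead of the POSIX one shown, and the requirements file name is an assumption):

```shell
# Create and activate a virtual environment for PIA by hand
python3 -m venv venv
. venv/bin/activate
# Confirm the venv interpreter is the active one
python -c "import sys; assert sys.prefix != sys.base_prefix; print('venv active')"
# Then install the project's dependencies, e.g.:
# pip install -r requirements.txt   # file name is an assumption
```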

2.downloading_model.ps1

Run 2.downloading_model.ps1 to download the PIA checkpoint and Stable Diffusion. If it doesn't work, please place the checkpoints yourself as follows:

└── models
    ├── DreamBooth_LoRA
    │   ├── ...
    ├── PIA
    │   ├── pia.ckpt
    └── StableDiffusion
        ├── vae
        ├── unet
        └── ...
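The folder layout above can be created by hand before dropping in the checkpoints. A shell sketch (the weight files themselves still have to be downloaded and copied into these folders separately):

```shell
# Recreate the expected checkpoint layout; the actual weights must be
# placed into these folders afterwards
mkdir -p models/DreamBooth_LoRA models/PIA \
         models/StableDiffusion/vae models/StableDiffusion/unet
ls models
```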

2.1.mklink_of_SDwebUI(optional).ps1

If you already have SDwebUI, you can run this script to link the SDwebUI ckpt/LoRA paths to the PIA ckpt/LoRA paths.
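The idea behind the script, sketched with POSIX symlinks; all paths here are illustrative assumptions, and on Windows the script presumably uses mklink or New-Item -ItemType SymbolicLink instead:

```shell
# Link an existing SDwebUI checkpoint folder into PIA's model tree
# (both paths are placeholders for this sketch)
mkdir -p /tmp/sdwebui/models/Stable-diffusion /tmp/pia/models
ln -s /tmp/sdwebui/models/Stable-diffusion /tmp/pia/models/DreamBooth_LoRA
ls -l /tmp/pia/models
```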

Usage

3.local_run.ps1

Image Animation

Image-to-video results can be obtained by:

python inference.py --config=example/config/lighthouse.yaml
python inference.py --config=example/config/harry.yaml
python inference.py --config=example/config/majic_girl.yaml

Run the commands above and you will get:

Input image 1 (lighthouse), prompts: "lightning, lighthouse" · "sun rising, lighthouse" · "fireworks, lighthouse"

Input image 2, prompts: "1boy smiling" · "1boy playing the magic fire" · "1boy is waving hands"

Input image 3, prompts: "1girl is smiling" · "1girl is crying" · "1girl, snowing"

Motion Magnitude

You can control the motion magnitude through the --magnitude parameter:

python inference.py --config=example/config/xxx.yaml --magnitude=0 # Small Motion
python inference.py --config=example/config/xxx.yaml --magnitude=1 # Moderate Motion
python inference.py --config=example/config/xxx.yaml --magnitude=2 # Large Motion
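To compare the three levels side by side, the calls can be scripted in one loop (a bash sketch; the echo makes it a dry run, so drop it to actually render, and the config path is a placeholder):

```shell
# Dry-run loop over all three motion-magnitude levels for one config
for m in 0 1 2; do
  echo python inference.py --config=example/config/lighthouse.yaml --magnitude=$m
done
```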

Examples:

python inference.py --config=example/config/labrador.yaml
python inference.py --config=example/config/bear.yaml
python inference.py --config=example/config/genshin.yaml

Input image & prompt, each shown at small, moderate, and large motion:
"a golden labrador is running" · "1bear is walking, ..." · "cherry blossom, ..."

Style Transfer

To achieve style transfer, add the --style_transfer flag (please don't forget to set the base model in xxx.yaml):

Examples:

python inference.py --config example/config/concert.yaml --style_transfer
python inference.py --config example/config/ania.yaml --style_transfer

Input image & base model (Realistic Vision / RCNZ Cartoon 3d), prompts: "1man is smiling" · "1man is crying" · "1man is singing"

Base model RCNZ Cartoon 3d, prompts: "1girl smiling" · "1girl open mouth" · "1girl is crying, pout"

Loop Video

You can generate a looping video by using the --loop parameter:

python inference.py --config=example/config/xxx.yaml --loop

Examples:

python inference.py --config=example/config/lighthouse.yaml --loop
python inference.py --config=example/config/labrador.yaml --loop

Input image 1 (lighthouse), prompts: "lightning, lighthouse" · "sun rising, lighthouse" · "fireworks, lighthouse"

Input image 2 (labrador), prompts: "labrador jumping" · "labrador walking" · "labrador running"

AnimateBench

We have open-sourced AnimateBench on HuggingFace, which includes images, prompts, and configs to evaluate PIA and other image animation methods.

Contact Us

Yiming Zhang: zhangyiming@pjlab.org.cn

Zhening Xing: xingzhening@pjlab.org.cn

Yanhong Zeng: zengyanhong@pjlab.org.cn

Acknowledgements

The code is built upon AnimateDiff, Tune-a-Video, and PySceneDetect.