PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models
Yiming Zhang†, Zhening Xing†, Yanhong Zeng, Youqing Fang, Kai Chen*
(*Corresponding Author, †Equal Contribution)
PIA is a personalized image animation method that generates videos with high motion controllability and strong text and image alignment.
[2023/12/22] Released the model and demo of PIA. Try it to make your own personalized movie!
- Online Demo on OpenXLab
- Checkpoint on Google Drive
conda env create -f environment.yaml
conda activate pia
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
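If git-lfs is not available, the same Stable Diffusion v1.5 weights can be fetched with the huggingface_hub Python package instead; a minimal sketch (the package is an extra dependency, not required by the steps above):

```python
# Alternative to the git-lfs clone above: download the Stable Diffusion v1.5
# weights into the same target directory with huggingface_hub
# (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    local_dir="models/StableDiffusion",  # same path the git clone uses
)
```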
bash download_bashscripts/1-RealisticVision.sh
bash download_bashscripts/2-RcnzCartoon.sh
bash download_bashscripts/3-MajicMix.sh
bash download_bashscripts/0-PIA.sh
You can also download pia.ckpt directly from Google Drive.
Put checkpoints as follows:
└── models
    ├── DreamBooth_LoRA
    │   ├── ...
    ├── PIA
    │   ├── pia.ckpt
    └── StableDiffusion
        ├── vae
        ├── unet
        └── ...
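Before running inference, a quick stdlib-only check can confirm that the layout above is in place; a minimal sketch (the exact DreamBooth_LoRA contents depend on which download scripts you ran, so only the common paths are checked):

```python
# Sanity-check the checkpoint layout shown above.
from pathlib import Path

expected = [
    "models/PIA/pia.ckpt",
    "models/StableDiffusion/unet",
    "models/StableDiffusion/vae",
]
for path in map(Path, expected):
    print(("ok      " if path.exists() else "MISSING ") + str(path))
```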
Image-to-video results can be obtained by:
python inference.py --config=example/config/lighthouse.yaml
python inference.py --config=example/config/harry.yaml
python inference.py --config=example/config/majic_girl.yaml
Running the commands above produces the corresponding animation results.
You can control the motion magnitude through the magnitude parameter:
python inference.py --config=example/config/xxx.yaml --magnitude=0 # Small Motion
python inference.py --config=example/config/xxx.yaml --magnitude=1 # Moderate Motion
python inference.py --config=example/config/xxx.yaml --magnitude=2 # Large Motion
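If you want all three magnitudes for one image side by side, the command can be driven from a short Python loop; a convenience sketch that simply shells out to inference.py (pick any example config):

```python
# Render the same config at every motion magnitude for comparison.
import subprocess

config = "example/config/lighthouse.yaml"  # any example config works
for magnitude in (0, 1, 2):  # small, moderate, large
    subprocess.run(
        ["python", "inference.py", f"--config={config}", f"--magnitude={magnitude}"],
        check=True,
    )
```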
Examples:
python inference.py --config=example/config/labrador.yaml
python inference.py --config=example/config/bear.yaml
python inference.py --config=example/config/genshin.yaml
| Input Image | Small Motion | Moderate Motion | Large Motion |
| :---: | :---: | :---: | :---: |
| a golden labrador is running | | | |
| 1bear is walking, ... | | | |
| cherry blossom, ... | | | |
To achieve style transfer, run the following commands (don't forget to set the base model in xxx.yaml):
Examples:
python inference.py --config=example/config/concert.yaml --style_transfer
python inference.py --config=example/config/ania.yaml --style_transfer
| Input Image | 1man is smiling | 1man is crying | 1man is singing |
| :---: | :---: | :---: | :---: |
| Realistic Vision | | | |
| RCNZ Cartoon 3d | | | |

| Input Image | 1girl smiling | 1girl open mouth | 1girl is crying, pout |
| :---: | :---: | :---: | :---: |
| RCNZ Cartoon 3d | | | |
You can generate a looping video with the --loop parameter:
python inference.py --config=example/config/xxx.yaml --loop
Examples:
python inference.py --config=example/config/lighthouse.yaml --loop
python inference.py --config=example/config/labrador.yaml --loop
| Input Image | lightning, lighthouse | sun rising, lighthouse | fireworks, lighthouse |
| :---: | :---: | :---: | :---: |

| Input Image | labrador jumping | labrador walking | labrador running |
| :---: | :---: | :---: | :---: |
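As a rough sanity check on a looped result, you can compare the first and last frames of the generated animation, which should be close for a seamless loop; a sketch using Pillow, assuming the output is saved as a GIF (the path below is a placeholder, not the repo's actual output location):

```python
# Rough loop check: first and last frames of a seamless loop should be close.
from PIL import Image, ImageChops

gif = Image.open("outputs/lighthouse_loop.gif")  # placeholder output path
first = gif.convert("RGB")          # copy of frame 0
gif.seek(gif.n_frames - 1)          # jump to the final frame
last = gif.convert("RGB")
diff = ImageChops.difference(first, last)
# getextrema() returns one (min, max) pair per RGB channel.
print("max per-channel difference:", max(hi for _, hi in diff.getextrema()))
```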
We have open-sourced AnimateBench on HuggingFace, which includes images, prompts, and configs for evaluating PIA and other image animation methods.
Yiming Zhang: zhangyiming@pjlab.org.cn
Zhening Xing: xingzhening@pjlab.org.cn
Yanhong Zeng: zengyanhong@pjlab.org.cn
The code is built upon AnimateDiff, Tune-a-Video, and PySceneDetect.