AnimateAnyone

Unofficial implementation of Animate Anyone by Novita AI

Primary language: Python · License: Apache-2.0

Animate Anyone

Novita AI

Overview

This repository provides unofficial pre-trained weights and inference code for Animate Anyone. It is inspired by the MooreThreads/Moore-AnimateAnyone implementation, with some adjustments to the training process and datasets.

Samples

demo1.mp4
demo4.mp4
demo2.mp4
demo3.mp4

Quickstart

Build Environment

We recommend Python >= 3.10 and CUDA 11.7. Build the environment as follows:

# [Optional] Create a virtual env
python -m venv .venv
source .venv/bin/activate
# Install with pip:
pip install -r requirements.txt
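Before installing, you can sanity-check the interpreter and CUDA toolkit versions against the recommendation above. A minimal sketch (the 3.10 and nvcc checks are illustrative, not part of this repository):

```python
import shutil
import subprocess
import sys

def check_python(min_version=(3, 10)):
    """Return True if the running interpreter meets the recommended minimum."""
    return sys.version_info[:2] >= min_version

def cuda_version():
    """Return the CUDA version reported by nvcc, or None if nvcc is absent."""
    if shutil.which("nvcc") is None:
        return None
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    # nvcc prints a line like "Cuda compilation tools, release 11.7, V11.7.64"
    for token in out.replace(",", " ").split():
        if "." in token and token.replace(".", "").isdigit():
            return token
    return None

if __name__ == "__main__":
    print("Python OK:", check_python())
    print("CUDA:", cuda_version() or "nvcc not found")
```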

Download weights

Automatic download: run the following command to download the weights automatically:

python tools/download_weights.py

Weights will be placed under the ./pretrained_weights directory. The whole download may take a long time.

Inference

Here is the CLI command for running the inference script:

python -m scripts.pose2vid --config ./configs/prompts/animation.yaml -W 512 -H 784 -L 64
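The flags set the output width (-W), height (-H), and clip length in frames (-L). If you want to sweep several resolutions or clip lengths, the same command can be assembled programmatically; a sketch based only on the invocation above:

```python
import subprocess
import sys

def build_cmd(config, width=512, height=784, length=64):
    """Assemble the pose2vid invocation shown in the README."""
    return [
        sys.executable, "-m", "scripts.pose2vid",
        "--config", config,
        "-W", str(width), "-H", str(height), "-L", str(length),
    ]

if __name__ == "__main__":
    cmd = build_cmd("./configs/prompts/animation.yaml")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually launch inference
```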

You can refer to the format of animation.yaml to add your own reference images or pose videos. To convert a raw video into a pose video (keypoint sequence), run the following command:

python tools/vid2pose.py --video_path /path/to/your/video.mp4

Or try it on Novita AI

We've deployed this model on Novita AI, and you can try it out with Playground ➡️ https://novita.ai/playground#animate-anyone .

Acknowledgements

This project is based on MooreThreads/Moore-AnimateAnyone, which is licensed under the Apache License 2.0. We thank the authors of Animate Anyone and MooreThreads/Moore-AnimateAnyone for their open research and exploration.