ali-vilab/UniAnimate

About the installation in windows, using powershell and miniconda3

zephirusgit opened this issue · 11 comments

I always find that I am missing many things from the requirements. In this case the code also uses NCCL for multi-GPU runs, which is not yet available on Windows (I did not try WSL due to space issues, and I don't have any more GPUs either), so GPT-4 recommended that I disable it. Going back to the installation, I added more packages that I think are necessary but are not in the description; I asked GPT-4 what each one was for, and it finally worked. Although with my 12 GB card (I see it using 21 GB of shared memory, but it drags and is very slow) I never saw it move past 0%, even though I can see it processing. I share my notes in case anyone else ran into errors when trying to launch the inference. I'm going to try to change something so that it doesn't use so much VRAM, to see if it becomes usable.
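Since the bottleneck for me is VRAM, here is a minimal sketch to watch GPU memory from a second terminal while inference runs. It assumes pynvml (installed further below in these notes) and a single GPU at index 0:

# watch_vram.py -- hedged sketch, assumes pynvml is installed and one GPU at index 0
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # values are in bytes; print used/total in GiB every two seconds
        print(f"used {mem.used / 1024**3:.1f} GiB / total {mem.total / 1024**3:.1f} GiB")
        time.sleep(2)
finally:
    pynvml.nvmlShutdown()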


UniAnimate installation notes:

git clone https://github.com/ali-vilab/UniAnimate.git
cd UniAnimate
conda create -n UniAnimate python=3.9
conda activate UniAnimate
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
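A quick sanity check after this step, before adding anything else (standard PyTorch attributes, nothing UniAnimate-specific):

# check_torch.py -- verifies the conda install above
import torch
print(torch.__version__)          # should report 2.0.1
print(torch.cuda.is_available())  # should be True if the GPU driver is visible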

pip install modelscope

(create modeldownloader.py with the following two lines and run it with python modeldownloader.py)
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/unianimate', cache_dir='checkpoints/')

mv ./checkpoints/iic/unianimate/* ./checkpoints/
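In PowerShell, mv resolves to Move-Item and handles the wildcard; if you would rather do the move from Python (for example at the end of modeldownloader.py), a minimal sketch:

# move the downloaded files up one level into checkpoints/ (assumes the default modelscope layout above)
import shutil
from pathlib import Path

src = Path('checkpoints/iic/unianimate')
for item in src.iterdir():
    shutil.move(str(item), str(Path('checkpoints') / item.name))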

pip install opencv-python
#https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/
pip install --upgrade --quiet langchain-experimental
pip install --upgrade --quiet pillow open_clip_torch torch matplotlib

#Of course, everyone should use the index URL that matches their own CUDA version (a quick way to check it is shown after these installs).
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
pip install rotary-embedding-torch
pip install fairscale
pip install nvidia-ml-py3
pip install easydict
pip install imageio
pip install pytorch-lightning
pip install args
conda install -c conda-forge pynvml
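To see which CUDA version your PyTorch build was compiled against (so the cu118 index URL above actually matches), torch exposes it directly:

import torch
print(torch.version.cuda)   # e.g. '11.8' -> use the cu118 index URL for xformers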

#(Edit inference_unianimate_entrance.py) and change nccl to gloo
dist.init_process_group(backend='gloo', world_size=cfg.world_size, rank=cfg.rank)
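If you want the same file to keep NCCL on Linux and only fall back to gloo on Windows, a hedged variant of that line (torch.distributed.is_nccl_available() is a standard PyTorch call; cfg comes from the surrounding script):

import torch.distributed as dist

# gloo on Windows (no NCCL), nccl wherever it is actually available
backend = 'nccl' if dist.is_nccl_available() else 'gloo'
dist.init_process_group(backend=backend, world_size=cfg.world_size, rank=cfg.rank)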

python inference.py --cfg configs/UniAnimate_infer.yaml

SOLVED thanks

According to the tutorial above, I encountered the following error message. Does anyone know how to solve it?

(two screenshots of the error message)

Hey, thanks for the instructions. I have a little problem here. I don't know what I should do to move forward. Can anyone give me a hint?

(pip install modelscope

(create modeldownloader.py)
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/unianimate', cache_dir='checkpoints/')

mv ./checkpoints/iic/unianimate/* ./checkpoints/)

According to the tutorial above, I encountered the following error message. Does anyone know how to solve it?

(two screenshots of the error message)

Hello I have the same issue. Did you manage to fix it?

Still waiting for good soul to reply ;)

According to the tutorial above, I encountered the following error message. Does anyone know how to solve it?
(two screenshots of the error message)

Hello I have the same issue. Did you manage to fix it?

No, I gave up

Following the tutorial above, I ran into the error message below. Does anyone know how to solve it?

(two screenshots of the error message)

Hello, did you manage to solve it? I followed this tutorial six times and every attempt failed; along the way I also hit the same problem as you. I tried the pytorch-cuda=12.x versions as well and it still didn't work, so I've given up for now.

Following the tutorial above, I ran into the error message below. Does anyone know how to solve it?
(two screenshots of the error message)

Hello, did you manage to solve it? I followed this tutorial six times and every attempt failed; along the way I also hit the same problem as you. I tried the pytorch-cuda=12.x versions as well and it still didn't work, so I've given up for now.

No, I didn't solve it either; I also gave up.

It should be noted that on my side, hijacking an already existing python env from A1111/ComfyUI worked without much need for any additional dependencies.

diffusers 0.28.2
pytorch-lightning 2.3.3
torch 2.2.2+cu121
torchvision 0.17.2+cu121
torchaudio 2.2.2+cu121
transformers 4.39.3
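To compare your own environment against this list, a quick sketch using importlib.metadata (standard library since Python 3.8):

from importlib.metadata import version, PackageNotFoundError

for pkg in ['diffusers', 'pytorch-lightning', 'torch', 'torchvision', 'torchaudio', 'transformers']:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, 'not installed')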

Anyway, after the pip install modelscope:

Simply copy any already existing .py file, replace its whole contents with the two lines below, and save it as modeldownloader.py:

from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/unianimate', cache_dir='checkpoints/')

#After the script has finished, move the files up one level from the shell (this line is not part of the .py file):

mv ./checkpoints/iic/unianimate/* ./checkpoints/

Anaconda Prompt:

conda activate UniAnimate

cd \Path-to-UniAnimate

python modeldownloader.py

#The models should download to \Path-to-UniAnimate\checkpoints

#OR just download everything manually to \Path-to-UniAnimate\checkpoints
#Make sure the filenames are named correctly when downloading.

https://huggingface.co/camenduru/unianimate
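If the modelscope route fails, a hedged alternative is to pull that Hugging Face mirror from a script instead of clicking through the browser (snapshot_download is a standard huggingface_hub function; the repo id is taken from the link above):

from huggingface_hub import snapshot_download

# downloads the whole camenduru/unianimate mirror into ./checkpoints
snapshot_download(repo_id='camenduru/unianimate', local_dir='checkpoints')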

I am not able to install requirements.txt.
I get some yanked-version and "no matching distribution found" errors and the installation is terminated. :(

Thanks for the instructions. When I do
pip install -r requirements.txt,
I just skip the errors until I run
python inference.py --cfg configs/UniAnimate_infer.yaml,
and then install whatever is still missing.

It is important to find a PyTorch version that supports both xformers and libuv. The newest torch does not support libuv, but running
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu118
will upgrade PyTorch to a version that no longer matches torchvision and torchaudio (I don't know whether this has any effect), and if PyTorch gets upgraded too far it loses libuv support and python inference.py stops working.
I use pytorch==2.3.0 and xformers==0.0.26.post1 with python=3.9:

conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia

pip3 install -U xformers==0.0.26.post1+cu118 --index-url https://download.pytorch.org/whl/cu118

When you want to pin a specific xformers version, use
pip install xformers==<version> --index-url https://download.pytorch.org/whl/cu118
https://download.pytorch.org/whl/cu118 can be replaced with another index URL if you use a different CUDA version.
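To confirm that the torch/xformers pairing actually imports together after installing, a small check (both version attributes are standard; python -m xformers.info prints a fuller report):

import torch
import xformers

print('torch   ', torch.__version__)     # e.g. 2.3.0+cu118
print('xformers', xformers.__version__)  # e.g. 0.0.26.post1
print('cuda ok ', torch.cuda.is_available())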