AnimateDiff-I2V

AnimateDiff I2V version.


AnimateDiff

I may stop developing this repo for now. AnimateDiff was not originally designed for the I2V task; after spending a lot of time reading the diffusers source code, I think this route may not end up being the best compared to the webui (ldm injection) approach.

The approach still has potential, though, and I believe new motion models trained on bigger datasets or on specific motions will be released soon.

Still under development.

TODO

  • update diffusers to 0.20.1
  • support IP-Adapter
  • refactor the code and make AnimateDiff a diffusers plugin like sd-webui-animatediff
  • controlnet from TDS4874
  • solve/locate the color degradation problem, check TDS_'s solution. It seems the color problems come from the DDIM params (see the scheduler sketch after this list).
  • controlnet reference mode
  • controlnet multi module mode
  • ddim inversion from Tune-A-Video
  • support AnimateDiff v2
  • support AnimateDiff MotionLoRA
  • support FreeU
  • keyframe controlnet apply
  • controlnet inpainting mode
  • support AnimateDiff v3 without SparseCtrl
  • keyframe prompts apply
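
The color issue mentioned in the list above is most likely tied to the DDIM scheduler configuration. Below is a minimal sketch of the knobs to check in diffusers; the values shown are the stock SD1.5 settings and a starting point for debugging, not the confirmed fix for this repo.

```python
from diffusers import DDIMScheduler

# Sketch only: the DDIM parameters that most often cause color shifts.
scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",  # a mismatched beta schedule visibly degrades colors
    clip_sample=False,              # clipping latents is a common cause of washed-out colors
    set_alpha_to_one=False,
    steps_offset=1,
)
# pipe.scheduler = scheduler        # swap into the pipeline before sampling
```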

Experiments

Multi Controlnet

inpainting + canny

inpainting + canny
tail + tail

MotionLoRA I2V results:

Zoom In / Zoom Out results from this old branch
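
MotionLoRA applies low-rank updates to the motion module's attention projections. A generic sketch of merging such weights into the base state dict follows; the key names are illustrative, not the actual MotionLoRA checkpoint keys.

```python
import torch

def merge_motion_lora(base_state_dict, lora_state_dict, alpha=1.0):
    """Merge LoRA updates in place: W <- W + alpha * (up @ down).

    Key naming ("...lora.down.weight" / "...lora.up.weight") is illustrative;
    real MotionLoRA checkpoints may use different conventions.
    """
    for key in lora_state_dict:
        if not key.endswith("lora.down.weight"):
            continue
        up_key = key.replace("lora.down.weight", "lora.up.weight")
        base_key = key.replace(".lora.down.weight", ".weight")
        down = lora_state_dict[key].float()
        up = lora_state_dict[up_key].float()
        base_state_dict[base_key] += alpha * (up @ down)
    return base_state_dict
```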

Ablation experiment (controlnet/ipadapter)

results from this old branch

all / without denoise strength / without ipadapter / without controlnet (first frame)

Original SD1.5 I2V attempt

Below are old results from this old branch.

The first image is from pikalabs, the second was generated with SD1.5.

The first used IP-Adapter + init-image denoise, the second used only IP-Adapter.
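
"Init-image denoise" here follows the usual img2img recipe: encode the reference frame to latents, broadcast them over the video frames, add noise up to a timestep chosen by the strength value, then denoise from there. A minimal sketch with diffusers primitives; the function and argument names are placeholders, not this repo's API.

```python
import torch

@torch.no_grad()
def prepare_init_latents(vae, scheduler, image, num_frames=16, strength=0.6,
                         num_inference_steps=25, device="cuda"):
    # img2img-style start: skip the earliest (noisiest) steps according to strength.
    scheduler.set_timesteps(num_inference_steps, device=device)
    t_start = int(num_inference_steps * (1.0 - strength))
    timesteps = scheduler.timesteps[t_start:]

    # Encode the reference image and repeat it across the temporal axis.
    latents = vae.encode(image.to(device)).latent_dist.sample() * vae.config.scaling_factor
    latents = latents.unsqueeze(2).repeat(1, 1, num_frames, 1, 1)  # (b, c, f, h, w)

    # Noise the init latents to the first kept timestep, then denoise as usual.
    noise = torch.randn_like(latents)
    latents = scheduler.add_noise(latents, noise, timesteps[:1])
    return latents, timesteps
```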

Training

  • 23.8.22: Dropped the local training scripts; using the authors' repo for training experiments (I2V). First step: implement image injection following IP-Adapter (see the sketch below). Already tested in AI_power.
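
Image injection in the IP-Adapter style projects a CLIP image embedding into a handful of extra cross-attention tokens. A rough sketch of that projection is below; the sizes follow the public IP-Adapter setup for SD1.5 and are illustrative, not necessarily what the training code here uses.

```python
import torch
import torch.nn as nn

class ImageProjModel(nn.Module):
    """Project a CLIP image embedding into N extra cross-attention tokens.

    Sizes follow the public IP-Adapter SD1.5 setup (1024-d CLIP image embeds ->
    4 tokens of 768); treat them as illustrative defaults.
    """
    def __init__(self, clip_embed_dim=1024, cross_attn_dim=768, num_tokens=4):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Linear(clip_embed_dim, num_tokens * cross_attn_dim)
        self.norm = nn.LayerNorm(cross_attn_dim)

    def forward(self, image_embeds):
        # (batch, clip_embed_dim) -> (batch, num_tokens, cross_attn_dim)
        tokens = self.proj(image_embeds).reshape(image_embeds.shape[0], self.num_tokens, -1)
        return self.norm(tokens)

# Simplest injection: concatenate the image tokens with the text embeddings so the
# UNet's cross-attention attends to both. IP-Adapter proper instead routes the image
# tokens through dedicated key/value projections (decoupled cross-attention).
# encoder_hidden_states = torch.cat([text_embeds, image_tokens], dim=1)
```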

First I2V attempt

Character model: Yoimiya (with an initial reference image)

Character model: Yae Miko (with an initial reference image)
Without a character model, frame 20

Original README

See README.md.