Improved AnimateDiff integration for ComfyUI, initially adapted from sd-webui-animatediff but changed greatly since then. Please read the AnimateDiff repo README for more information about how it works at its core.
Examples shown here will also often make use of two helpful sets of nodes:
- ComfyUI-Advanced-ControlNet for loading files in batches and controlling which latents should be affected by the ControlNet inputs (work in progress, will include more advanced workflows + features for AnimateDiff usage later).
- comfy_controlnet_preprocessors for ControlNet preprocessors not present in vanilla ComfyUI; this repo is archived, and future development by its author will happen in comfyui_controlnet_aux. While most preprocessors are common between the two, some give different results. Workflows linked here use the archived version, comfy_controlnet_preprocessors. (TODO: I'll reinvestigate with more recent changes and update as needed)
- Clone this repo into the `custom_nodes` folder.
- Download motion modules. You will need at least 1. Different modules produce different results.
  - Original models `mm_sd_v14`, `mm_sd_v15`, and `mm_sd_v15_v2`: Google Drive | HuggingFace | CivitAI | Baidu NetDisk.
  - Stabilized finetunes of `mm_sd_v14`, `mm-Stabilized_mid` and `mm-Stabilized_high`, by manshoety: HuggingFace
  - Higher resolution finetune, `temporaldiff-v1-animatediff` by CiaraRowles: HuggingFace
- Place models in `ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models`. They can be renamed if you want.
- Get creative! If it works for normal image generation, it (probably) will work for AnimateDiff generations. Latent upscales? Go for it. ControlNets, one or more stacked? You betcha. Masking the conditioning of ControlNets to only affect part of the animation? Sure. Try stuff and you will be surprised by what you can do. Samples with workflows are included below.
- Compatible with a variety of samplers, including vanilla KSampler nodes and KSampler (Efficient) nodes.
- ControlNet support - both per-frame and "interpolating" between frames; this can loosely serve as img2video (see workflows below)
- Infinite animation length support using sliding context windows (introduced 9/17/23; see the sketch after this list)
- Prompt travel, and in general more control over per-frame conditioning
- Alternate context schedulers and context types
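To make the sliding-context idea concrete, here is a minimal Python sketch of how a uniform scheduler could split a long animation into overlapping windows. The function name and defaults (`context_length=16`, `context_overlap=4`) are illustrative assumptions, not this repo's actual implementation:

```python
# Hypothetical sketch of a uniform sliding-context scheduler. The option
# names mirror the node's options, but the algorithm here is simplified.
def uniform_contexts(num_frames: int, context_length: int = 16,
                     context_overlap: int = 4) -> list[list[int]]:
    """Split num_frames into overlapping windows of context_length frames."""
    if num_frames <= context_length:
        return [list(range(num_frames))]
    step = context_length - context_overlap
    windows, start = [], 0
    while start < num_frames:
        # wrap around the end so the animation can loop cleanly
        windows.append([(start + i) % num_frames for i in range(context_length)])
        if start + context_length >= num_frames:
            break
        start += step
    return windows

# 32 frames -> windows [0..15], [12..27], [24..31, 0..7]; each window is
# diffused with the motion module, and overlapping frames are blended.
print(uniform_contexts(32))
```

Each window stays within the motion module's trained frame count, which is what removes the cap on total animation length.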
The only required node to use AnimateDiff, the Loader outputs a model that will perform AnimateDiff functionality when passed into a sampling node.
Inputs:
- model: model to set up for AnimateDiff usage. Must be an SD1.5-derived model.
- context_options: optional context window to use while sampling; if passed in, total animation length has no limit. If not passed in, animation length will be limited to either 24 or 32 frames, depending on motion model.
- model_name: motion model to use with AnimateDiff.
- beta_schedule: noise scheduler for SD. `sqrt_linear` is the intended way to use AnimateDiff, with expected saturation. However, `linear` can give useful results as well, so feel free to experiment (see the sketch below for how the two differ).
Outputs:
- MODEL: model injected to perform AnimateDiff functions
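For intuition on the `beta_schedule` choice, here is a hedged sketch of the two schedules as defined in CompVis latent-diffusion's `make_beta_schedule` (the naming convention these options follow); the constants are the usual SD1.5 defaults and are an assumption here:

```python
import torch

def make_betas(schedule: str, steps: int = 1000,
               start: float = 0.00085, end: float = 0.012) -> torch.Tensor:
    """Sketch of the two beta schedules the Loader exposes (ldm naming)."""
    if schedule == "linear":
        # linear in sqrt(beta) space -- what base SD1.5 was trained with
        return torch.linspace(start ** 0.5, end ** 0.5, steps) ** 2
    if schedule == "sqrt_linear":
        # linear directly in beta space -- what AnimateDiff expects
        return torch.linspace(start, end, steps)
    raise ValueError(f"unknown schedule: {schedule}")
```

The two schedules agree at the endpoints but differ in between, which changes how noise is distributed across timesteps and explains the saturation difference noted above.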
To use, just plug your model into the AnimateDiff Loader. When the output model (and any derivative of it in this pathway) is passed into a sampling node, AnimateDiff will do its thing.
The desired animation length is determined by the latents passed into the sampler. With context_options connected, there is no limit to the number of latents you can pass in, AKA unlimited animation length. When no context_options are connected, the sweet spot is 16 latents for best results, with a hard limit of 24 or 32 depending on the motion model loaded. These same rules apply to the Uniform Context Options' context_length.
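As a concrete illustration (the shapes below assume the standard SD1.5 latent layout, 4 channels at 1/8 resolution):

```python
import torch

# The animation length is simply the latent batch size: for a 512x512,
# 16-frame animation, the sampler receives a latent tensor shaped like this.
frames, channels = 16, 4
latents = torch.zeros(frames, channels, 512 // 8, 512 // 8)

# With context_options connected, `frames` can be arbitrarily large; the
# sampler then walks it in overlapping windows (see the earlier sketch).
```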
TODO: fill this out
Samples (download or drag images of the workflows into ComfyUI to instantly load the corresponding workflows!)
(sample animations: aaa_readme_00003_.webm, aaa_readme_00018_.webm)
TODO: add generated image here (gif is too big for github)
txt2img w/ ControlNet-stabilized latent-upscale (partial denoise on upscale, Scaled Soft ControlNet Weights)
(open_pose images provided courtesy of toyxyz)
Training data used by the authors of the AnimateDiff paper contained Shutterstock watermarks. Since mm_sd_v15 was finetuned on finer, less drastic movement, that motion module attempts to replicate the watermark's transparency, and it does not get blurred away the way it does with mm_sd_v14. Using other motion modules, or combining them with Advanced KSamplers, should alleviate watermark issues.