
AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning

demo.mp4

Thank you all for your attention. For more details, please refer to our Project Page and Hugging Face Demo 🤗.

A video edited with AnimateLCM in 5 minutes at 1280x720; the original video can be found on X:

Prompt: "green alien, red eyes"

0-seed-3882543293-alien.mp4

A non-cherry-picked demo combining AnimateLCM with our long-video generation work (Gen-L-Video).

0-seed-3882543293.mp4

Use diffusers to test the beta version of the AnimateLCM text-to-video model:

import torch
from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Load the AnimateLCM motion adapter and attach it to a personalized SD1.5 base model.
adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16)
# Swap in the LCM scheduler, which enables few-step sampling.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

# Load the AnimateLCM LoRA weights and set their strength.
pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm-lora")
pipe.set_adapters(["lcm-lora"], [0.8])

# Reduce peak memory usage.
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

# Generate 16 frames in only 6 inference steps with a low guidance scale.
output = pipe(
    prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution",
    negative_prompt="bad quality, worse quality, low resolution",
    num_frames=16,
    guidance_scale=2.0,
    num_inference_steps=6,
    generator=torch.Generator("cpu").manual_seed(0),
)
frames = output.frames[0]
export_to_gif(frames, "animatelcm.gif")
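The `output.frames[0]` returned by the pipeline is a plain list of PIL images, so if you prefer not to depend on diffusers' `export_to_gif` helper you can save the animation directly with Pillow. A minimal sketch (the `save_gif` helper name is our own, not part of the library):

```python
from PIL import Image


def save_gif(frames, path, fps=8):
    """Save a list of PIL images as a looping animated GIF.

    `duration` is the per-frame display time in milliseconds.
    """
    frames[0].save(
        path,
        save_all=True,
        append_images=frames[1:],
        duration=int(1000 / fps),
        loop=0,
    )


# Usage with dummy frames (replace with the pipeline's output.frames[0]):
frames = [Image.new("RGB", (64, 64), (i * 15, 0, 0)) for i in range(16)]
save_gif(frames, "animatelcm.gif", fps=8)
```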

🎉 Check out the advanced community developments: ComfyUI-AnimateLCM and ComfyUI-Reddit.

🎉 Awesome Workflow for AnimateLCM: Tutorial Video.

More code and weights will be released.

Reference

@misc{wang2024animatelcm,
      title={AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning}, 
      author={Fu-Yun Wang and Zhaoyang Huang and Xiaoyu Shi and Weikang Bian and Guanglu Song and Yu Liu and Hongsheng Li},
      year={2024},
      eprint={2402.00769},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@article{wang2023gen,
  title={Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising},
  author={Wang, Fu-Yun and Chen, Wenshuo and Song, Guanglu and Ye, Han-Jia and Liu, Yu and Li, Hongsheng},
  journal={arXiv preprint arXiv:2305.18264},
  year={2023}
}