ComfyUI-MagCache

The official code that integrates MagCache (Fast Video Generation with Magnitude-Aware Cache) with ComfyUI.

🫖 Introduction

Magnitude-aware Cache (MagCache) is a training-free caching approach. It estimates the fluctuating differences among model outputs across timesteps from robust magnitude observations, and uses an error-modeling mechanism together with an adaptive caching strategy to accelerate inference. MagCache works well for both video and image diffusion models. For more details and results, please visit our project page and code.
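
For intuition, here is a minimal sketch of the magnitude observation (hypothetical names, not the shipped implementation): the ratios between magnitudes of successive model-output residuals are stable across prompts, so they can be calibrated once and reused to predict the error of serving a cached residual.

```python
import torch

def estimate_magnitude_ratios(residuals: list[torch.Tensor]) -> list[float]:
    """Hypothetical sketch: given model-output residuals collected over one
    calibration run (one per denoising timestep), return the magnitude ratio
    between adjacent timesteps. MagCache's observation is that these ratios
    are robust across prompts, so they can be measured once and reused to
    predict the error of serving a cached residual at inference time."""
    return [
        (residuals[t].norm() / residuals[t - 1].norm()).item()
        for t in range(1, len(residuals))
    ]
```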

MagCache has now been integrated into ComfyUI and is compatible with the ComfyUI native nodes. ComfyUI-MagCache is easy to use: simply connect the MagCache node to the ComfyUI native nodes for seamless usage.

🔥 Latest News

  • If you like our project, please give us a star ⭐ on GitHub to follow the latest updates.
  • [2025/6/10] 🔥 Support Wan2.1 T2V&I2V, HunyuanVideo T2V, FLUX-dev T2I

Installation

  1. Go to the ComfyUI custom_nodes folder: ComfyUI/custom_nodes/
  2. Clone this repo: git clone https://github.com/zehong-ma/ComfyUI-MagCache.git
  3. Enter the ComfyUI-MagCache folder: cd ComfyUI-MagCache/
  4. Install the dependencies: pip install -r requirements.txt
  5. Go to the project folder ComfyUI/ and run python main.py

Usage

Download Model Weights

Please first prepare the model weights in ComfyUI format by referring to the following links:

MagCache

To use the MagCache node, simply add it to your workflow after the Load Diffusion Model node or the Load LoRA node (if you need LoRA). Generally, MagCache achieves a speedup of 2x to 3x with acceptable visual quality loss. The following table gives the recommended magcache_thresh, retention_ratio, and magcache_K for different models:

| Models              | magcache_thresh | retention_ratio | magcache_K |
|---------------------|-----------------|-----------------|------------|
| FLUX                | 0.24            | 0.1             | 5          |
| HunyuanVideo-T2V    | 0.24            | 0.2             | 6          |
| Wan2.1-T2V-1.3B     | 0.12            | 0.2             | 4          |
| Wan2.1-T2V-14B      | 0.24            | 0.2             | 6          |
| Wan2.1-I2V-480P-14B | 0.24            | 0.2             | 6          |
| Wan2.1-I2V-720P-14B | 0.24            | 0.2             | 6          |

If the image/video after applying MagCache is of low quality, please reduce magcache_thresh and magcache_K.
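
The interplay of the three knobs can be sketched in code. This is a hedged reconstruction with hypothetical names, not the node's actual implementation: we read retention_ratio as the fraction of early denoising steps that are always computed, magcache_thresh as a budget on the accumulated estimated caching error, and magcache_K as a cap on consecutive skipped steps.

```python
def should_skip_step(
    step: int,
    num_steps: int,
    step_error: float,   # estimated error of reusing the cache at this step
    state: dict,         # mutable state: {"acc_error": 0.0, "consecutive_skips": 0}
    magcache_thresh: float = 0.24,
    retention_ratio: float = 0.2,
    magcache_K: int = 6,
) -> bool:
    """Hypothetical sketch of an adaptive, magnitude-aware cache policy."""
    # Always run the earliest steps: they determine the global layout.
    if step < int(retention_ratio * num_steps):
        return False
    # Project the error accumulated if this step is also served from cache.
    projected = state["acc_error"] + step_error
    if projected <= magcache_thresh and state["consecutive_skips"] < magcache_K:
        state["acc_error"] = projected
        state["consecutive_skips"] += 1
        return True  # reuse the cached residual; skip the model call
    # Budget exhausted: run the model and reset the accumulators.
    state["acc_error"] = 0.0
    state["consecutive_skips"] = 0
    return False
```

Under this reading, lowering magcache_thresh or magcache_K makes skipping rarer, which is why they are the first knobs to reduce when quality drops.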

The demo workflows (flux, hunyuanvideo, wan2.1_t2v, and wan2.1_i2v) are placed in the examples folder. In our experiments, the videos generated by Wan2.1 in ComfyUI are not as high-quality as those produced by the original unquantized version.

Compile Model

To use the Compile Model node, simply add it to your workflow after the Load Diffusion Model node or the MagCache node. Compile Model uses torch.compile to enhance performance by compiling the model into more efficient intermediate representations (IRs). This compilation leverages backend compilers to generate optimized code, which can significantly speed up inference. Compilation may take a long time the first time you run the workflow, but once the model is compiled, inference is extremely fast.
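
For reference, what the node does is analogous to calling torch.compile on a module yourself. Below is a minimal standalone sketch; the toy model is a placeholder for the loaded diffusion model.

```python
import torch
import torch.nn as nn

# Toy stand-in: in the workflow, the loaded diffusion model plays this role.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))

# The first forward pass triggers compilation (slow); later calls reuse
# the optimized code generated by the backend compiler.
compiled = torch.compile(model)

x = torch.randn(8, 64)
with torch.no_grad():
    out = compiled(x)
```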

Acknowledgments

Thanks to ComfyUI-TeaCache, ComfyUI, MagCache, TeaCache, HunyuanVideo, FLUX, and Wan2.1.