This is a work-in-progress fork of Genmo's Mochi text-to-video model, optimized to run on a single-GPU node with reduced VRAM. It is quite capable with 48 GB, and it should now run on a single 24 GB GPU.
Do not exceed 61 frames and try 640x480. VRAM use scales mostly with frame count and resolution. The number of inference steps should not change VRAM use, but generation time scales with it: 100 steps seems OK and will likely take 15-25 minutes. The original source used 200 steps, which takes roughly twice as long.
Windows is not yet tested, but it can probably work? ¯\_(ツ)_/¯
If your system is already using VRAM to run a desktop environment, you may need to lower settings further.
The approach is mostly just shifting the VAE, text encoder, DiT, etc. back and forth between GPU and CPU when they are not needed, and using bfloat16 everywhere. This may require significant system RAM (~64 GB), or may be extra slow if it has to fall back to the pagefile on systems with <=32 GB of RAM, since T5 and the DiT are still fairly large. The time spent moving models back and forth is small relative to the inference time spent in the DiT steps.
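The offloading pattern looks roughly like the sketch below. This is a minimal illustration rather than the fork's actual code; the component names (`t5_encoder`, `dit_sampler`, `vae_decoder`) are placeholders.

```python
import torch

def run_offloaded(module, *inputs, device="cuda"):
    """Move one component to the GPU in bfloat16, run it, then push it
    back to CPU so the next component can use the freed VRAM."""
    module.to(device, dtype=torch.bfloat16)
    with torch.no_grad():
        out = module(*inputs)
    module.to("cpu")          # free VRAM for the next stage
    torch.cuda.empty_cache()  # release cached blocks back to the driver
    return out

# Hypothetical usage -- only one large component lives on the GPU at a time:
# text_emb = run_offloaded(t5_encoder, prompt_tokens)
# latents  = run_offloaded(dit_sampler, initial_noise, text_emb)  # loops over steps internally
# video    = run_offloaded(vae_decoder, latents)
```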
Further optimization: maybe bitsandbytes NF4 quantization. That might bring VRAM use down to 16 GB or less, assuming it doesn't destroy output quality. I may also try injecting a first-frame image to make it do img2video.
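As a rough sketch of what NF4 quantization of the text encoder could look like if it were loaded through Hugging Face transformers (this is not something the fork does yet, and the checkpoint name below is the generic T5-XXL rather than anything verified against the Mochi weights):

```python
import torch
from transformers import BitsAndBytesConfig, T5EncoderModel

# NF4 4-bit weights with bfloat16 compute, via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Hypothetical: quantize only the text encoder. The DiT would need
# separate handling and may be more sensitive to quantization error.
text_encoder = T5EncoderModel.from_pretrained(
    "google/t5-v1_1-xxl",
    quantization_config=bnb_config,
)
```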
A state-of-the-art video generation model by Genmo.
[Demo video: grid_output.mp4]
Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems. We’re releasing the model under a permissive Apache 2.0 license. Try this model for free on our playground.
Install using uv:
git clone https://github.com/genmoai/models
cd models
pip install uv
uv venv .venv
source .venv/bin/activate
uv pip install -e .
Download the weights from Hugging Face or via magnet:?xt=urn:btih:441da1af7a16bcaa4f556964f8028d7113d21cbb&dn=weights&tr=udp://tracker.opentrackr.org:1337/announce
to a folder on your computer.
Start the gradio UI with
python3 -m mochi_preview.gradio_ui --model_dir "<path_to_downloaded_directory>"
Or generate videos directly from the CLI with
python3 -m mochi_preview.infer --prompt "A hand with delicate fingers picks up a bright yellow lemon from a wooden bowl filled with lemons and sprigs of mint against a peach-colored background. The hand gently tosses the lemon up and catches it, showcasing its smooth texture. A beige string bag sits beside the bowl, adding a rustic touch to the scene. Additional lemons, one halved, are scattered around the base of the bowl. The even lighting enhances the vibrant colors and creates a fresh, inviting atmosphere." --seed 1710977262 --cfg-scale 4.5 --model_dir "<path_to_downloaded_directory>"
Replace <path_to_downloaded_directory>
with the path to your model directory.
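If you want to script several generations, a thin wrapper around the CLI above works; the sketch below uses only the flags shown in this README, and the prompts and seed scheme are just examples.

```python
import subprocess

MODEL_DIR = "<path_to_downloaded_directory>"  # same placeholder as above

prompts = [
    "A hand picks up a bright yellow lemon from a wooden bowl of lemons and mint.",
    "Rain falling on a window at dusk, soft reflections of city lights.",
]

for seed, prompt in enumerate(prompts, start=1):
    subprocess.run(
        [
            "python3", "-m", "mochi_preview.infer",
            "--prompt", prompt,
            "--seed", str(seed),
            "--cfg-scale", "4.5",
            "--model_dir", MODEL_DIR,
        ],
        check=True,  # stop the batch if a generation fails
    )
```

Each subprocess reloads the weights from scratch, so this trades speed for simplicity.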
Mochi 1 represents a significant advancement in open-source video generation, featuring a 10 billion parameter diffusion model built on our novel Asymmetric Diffusion Transformer (AsymmDiT) architecture. Trained entirely from scratch, it is the largest video generative model ever openly released. And best of all, it’s a simple, hackable architecture. Additionally, we are releasing an inference harness that includes an efficient context parallel implementation.
Alongside Mochi, we are open-sourcing our video AsymmVAE. We use an asymmetric encoder-decoder structure to build an efficient high quality compression model. Our AsymmVAE causally compresses videos to a 128x smaller size, with an 8x8 spatial and a 6x temporal compression to a 12-channel latent space.
| Params Count | Enc Base Channels | Dec Base Channels | Latent Dim | Spatial Compression | Temporal Compression |
|---|---|---|---|---|---|
| 362M | 64 | 128 | 12 | 8x8 | 6x |
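To make those compression factors concrete, here is a rough latent-shape estimate for the 61-frame, 640x480 setting suggested above. The causal temporal formula is an assumption; the exact boundary handling may differ.

```python
# Rough latent shape for a 61-frame, 640x480 clip.
frames, height, width = 61, 480, 640

latent_channels = 12               # from the table above
latent_h = height // 8             # 8x spatial compression -> 60
latent_w = width // 8              # -> 80
latent_t = (frames - 1) // 6 + 1   # assumed causal 6x temporal compression -> 11

print((latent_channels, latent_t, latent_h, latent_w))  # (12, 11, 60, 80)
```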
An AsymmDiT efficiently processes user prompts alongside compressed video tokens by streamlining text processing and focusing neural network capacity on visual reasoning. AsymmDiT jointly attends to text and visual tokens with multi-modal self-attention and learns separate MLP layers for each modality, similar to Stable Diffusion 3. However, our visual stream has nearly 4 times as many parameters as the text stream via a larger hidden dimension. To unify the modalities in self-attention, we use non-square QKV and output projection layers. This asymmetric design reduces inference memory requirements. Many modern diffusion models use multiple pretrained language models to represent user prompts. In contrast, Mochi 1 simply encodes prompts with a single T5-XXL language model.
| Params Count | Num Layers | Num Heads | Visual Dim | Text Dim | Visual Tokens | Text Tokens |
|---|---|---|---|---|---|---|
| 10B | 48 | 24 | 3072 | 1536 | 44520 | 256 |
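To make the non-square QKV idea above concrete, here is a minimal, self-contained sketch of joint self-attention with asymmetric visual and text streams, using the dimensions from the table. It illustrates the general design rather than the actual Mochi implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricJointAttention(nn.Module):
    """Toy joint attention: visual and text tokens keep different hidden
    sizes but are projected into a shared attention width so both
    modalities attend to each other in one self-attention call."""

    def __init__(self, visual_dim=3072, text_dim=1536, num_heads=24):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = visual_dim // num_heads  # 128
        # Non-square projections: text (1536) is lifted into the shared
        # 3072-wide attention space; visual is already that wide.
        self.qkv_visual = nn.Linear(visual_dim, 3 * visual_dim)
        self.qkv_text = nn.Linear(text_dim, 3 * visual_dim)
        self.out_visual = nn.Linear(visual_dim, visual_dim)
        self.out_text = nn.Linear(visual_dim, text_dim)  # back down to 1536

    def forward(self, visual_tokens, text_tokens):
        B, Nv, _ = visual_tokens.shape
        Nt = text_tokens.shape[1]
        # Concatenate both modalities into one token sequence.
        qkv = torch.cat(
            [self.qkv_visual(visual_tokens), self.qkv_text(text_tokens)], dim=1
        )  # (B, Nv + Nt, 3 * 3072)
        q, k, v = qkv.chunk(3, dim=-1)

        def split_heads(t):
            return t.reshape(B, Nv + Nt, self.num_heads, self.head_dim).transpose(1, 2)

        attn = F.scaled_dot_product_attention(split_heads(q), split_heads(k), split_heads(v))
        attn = attn.transpose(1, 2).reshape(B, Nv + Nt, -1)
        # Route each modality back through its own output projection.
        return self.out_visual(attn[:, :Nv]), self.out_text(attn[:, Nv:])
```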
Upstream, the model requires at least 4 H100 GPUs to run; reducing that requirement is what this fork attempts, and we welcome contributions from the community toward the same goal.
Genmo video models are general text-to-video diffusion models that inherently reflect the biases and preconceptions found in their training data. While steps have been taken to limit NSFW content, organizations should implement additional safety protocols and exercise careful judgment before deploying these model weights in any commercial services or products.
Under the research preview, Mochi 1 is a living and evolving checkpoint, and there are a few known limitations. The initial release generates videos at 480p. In some edge cases with extreme motion, minor warping and distortions can occur. Mochi 1 is also optimized for photorealistic styles, so it does not perform well with animated content. We anticipate that the community will fine-tune the model to suit various aesthetic preferences.
@misc{genmo2024mochi,
title={Mochi},
author={Genmo Team},
year={2024}
}