An Auto1111 webui extension implementing various text2video models, such as ModelScope and VideoCrafter, using only the Auto1111 webui dependencies and downloadable models (so no logins required anywhere).
6 GB of VRAM should be enough to run on GPU at 256x256 with the low-VRAM VAE enabled (and we are already getting reports of people generating 192x192 videos with 4 GB of VRAM). A 24-frame 256x256 video definitely fits into the 12 GB of an NVIDIA GeForce RTX 2080 Ti, and if your videocard supports the Torch2 attention optimization, you can fit a whopping 125-frame (8-second) video into the same 12 GB of VRAM. 250 frames (16 seconds) under the same conditions take 20 GB.
Prompt: best quality, anime girl dancing (example video: exampleUntitled.mp4)
We would appreciate any help with this extension, especially pull requests.
VideoCrafter runs with around 9.2 GB of VRAM at the default settings.
Update 2023-03-27: VAE settings and "Keep model in VRAM" moved to the general webui settings under the 'ModelScopeTxt2Vid' section.
Update 2023-03-26: prompt weights implemented! (ModelScope only, as of 2023-04-05)
Update 2023-04-05: added VideoCrafter support, renamed the extension to plainly 'sd-webui-text2video'
Update 2023-04-13: in-framing/in-painting support: allows you to 'animate' an existing picture or even loop videos seamlessly!
Update 2023-04-15: MEGA-UPDATE: Torch2/xformers optimizations make it possible to render a 125-frame video on 12 GB of VRAM. CPU offloading is now skipped when keep_pipe_in_vram is checked.
Update 2023-04-16: WebAPI is available!
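For reference, here is a minimal sketch of calling the web API from Python. The endpoint path and parameter names below are assumptions made for illustration only, so check the extension's API documentation for the actual schema; it also assumes the webui was launched with the --api flag on the default port.

```python
import requests

# Hypothetical payload -- parameter names are assumptions, not the
# extension's documented schema.
payload = {
    "prompt": "best quality, anime girl dancing",
    "n_prompt": "text, watermark, copyright, blurry",
    "steps": 30,
    "frames": 24,
    "width": 256,
    "height": 256,
}

# Hypothetical endpoint path; the webui must be running with --api enabled.
response = requests.post("http://127.0.0.1:7860/t2v/run", json=payload)
response.raise_for_status()
print(response.json())
```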
Prompt: cinematic explosion by greg rutkowski (example video: vid.mp4)
Prompt: really attractive anime girl skating, by makoto shinkai, cinematic lighting (example video: gosh.mp4)
'Continuing' an existing image
Prompt: best quality, astronaut dog (example video: egUntitled.mp4)
Prompt: explosion (example video: expl.mp4)
In-painting and looping back the videos
Prompt: nuclear explosion (example video: galaxybrain.mp4)
Prompt: best quality, lots of cheese (example video: matcheeseUntitled.mp4)
Prompt: anime 1girl reimu touhou (example video: working.mp4)
Download the following files from the original HuggingFace repository. Alternatively, download the half-precision fp16 pruned weights (they are smaller and use less VRAM when loading):
- VQGAN_autoencoder.pth
- configuration.json
- open_clip_pytorch_model.bin
- text2video_pytorch_model.pth
And put them in stable-diffusion-webui/models/ModelScope/t2v. Create those two folders if they are missing.
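If you prefer scripting the download, here is a minimal sketch using a reasonably recent huggingface_hub (pip install it if it is not already in your environment). The repo id below is an assumption for illustration; substitute the original HuggingFace repository, or the fp16 pruned mirror, that you are actually downloading from.

```python
from pathlib import Path
from huggingface_hub import hf_hub_download

# Target folder inside the webui installation (created if missing).
target = Path("stable-diffusion-webui/models/ModelScope/t2v")
target.mkdir(parents=True, exist_ok=True)

# Assumed repo id -- replace with the repository you are downloading from.
repo_id = "damo-vilab/modelscope-damo-text-to-video-synthesis"

for filename in [
    "VQGAN_autoencoder.pth",
    "configuration.json",
    "open_clip_pytorch_model.bin",
    "text2video_pytorch_model.pth",
]:
    # Downloads each file straight into the ModelScope/t2v folder.
    hf_hub_download(repo_id=repo_id, filename=filename, local_dir=str(target))
```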
For VideoCrafter, download the pretrained T2V model either via this link or as the pruned half-precision weights, and put the model.ckpt in models/VideoCrafter/model.ckpt.
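Similarly, a minimal sketch for fetching the VideoCrafter checkpoint with plain requests; the URL below is a placeholder, so substitute the actual link above (or the pruned half-precision weights).

```python
from pathlib import Path
import requests

# Placeholder URL -- replace with the actual VideoCrafter checkpoint link.
MODEL_URL = "https://example.com/path/to/model.ckpt"

target = Path("stable-diffusion-webui/models/VideoCrafter")
target.mkdir(parents=True, exist_ok=True)

# Stream the download to disk so the whole checkpoint never sits in memory.
with requests.get(MODEL_URL, stream=True) as r:
    r.raise_for_status()
    with open(target / "model.ckpt", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```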
Thanks to https://github.com/ExponentialML/Text-To-Video-Finetuning, you can fine-tune your own models!
To use a fine-tuned model here, run this script, which converts the Diffusers-formatted model that the repo outputs into the original weights format.
Example of a fine-tuned model: Animov-0.1 by strangeman3107. The converted weights for this model reside here.
(example video from the fine-tuned Animov-0.1 model: w.mp4)
txt2vid with img2vid
vid2vid
HuggingFace space:
https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis
The model PyTorch implementation from ModelScope:
https://github.com/modelscope/modelscope/tree/master/modelscope/models/multi_modal/video_synthesis
Google Colab from the devs:
https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing
Github: