stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts.

Try it yourself in Colab.

Example - morphing between "blueberry spaghetti" and "strawberry spaghetti"

(video: berry_good_spaghetti.2.mp4)

How it Works

The Notebook/App

The in-browser Colab demo allows you to generate videos by interpolating the latent space of Stable Diffusion.

You can either dream up different versions of the same prompt, or morph between different text prompts (with seeds set for each for reproducibility).
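For reference, "dreaming" different versions of a single prompt is just the programmatic walk call described below with the prompt repeated and a different seed for each copy. A minimal sketch (the seeds and the name 'berry_dreams' are made up for illustration; see the Python Package section for the full API):

from stable_diffusion_videos import walk

# Sketch: "dream" two versions of one prompt by repeating it with different seeds
walk(
    prompts=['blueberry spaghetti', 'blueberry spaghetti'],
    seeds=[42, 1337],
    output_dir='dreams',
    name='berry_dreams',
)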

The app is built with Gradio, which allows you to interact with the model in a web app. Here's how I suggest you use it:

  1. Use the "Images" tab to generate images you like.

    • Find two images you want to morph between
    • These images should use the same settings (guidance scale, scheduler, height, width)
    • Keep track of the seeds/settings you used so you can reproduce them
  2. Generate videos using the "Videos" tab

    • Using the images you found from the step above, provide the prompts/seeds you recorded
    • Set num_walk_steps: a small number like 3 or 5 is fine for testing, but for great results you'll want something larger (60-200 steps)
    • You can set the output_dir to the directory you wish to save to

Python Package

Setup

Install the package

pip install stable_diffusion_videos

Authenticate with Hugging Face

huggingface-cli login
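If you'd rather authenticate from Python instead of the shell (for example, inside Colab or a notebook), the huggingface_hub library behind the CLI provides a login helper; a minimal sketch:

from huggingface_hub import notebook_login

notebook_login()  # prompts for your Hugging Face access token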

Programmatic Usage

from stable_diffusion_videos import walk

walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    output_dir='dreams',     # Where images/videos will be saved
    name='animals_test',     # Subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,      # Higher adheres to prompt more, lower lets model take the wheel
    num_steps=5,             # Change to 60-200 for better results...3-5 for testing
    num_inference_steps=50,  # Number of diffusion steps per generated frame
    scheduler='klms',        # One of: "klms", "default", "ddim"
    disable_tqdm=False,      # Set to True to disable tqdm progress bar
    make_video=True,         # If false, just save images
    use_lerp_for_text=True,  # Use lerp for text embeddings instead of slerp
    do_loop=False,           # Change to True if you want last prompt to loop back to first prompt
)
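You can also chain more than two prompts into one video. A minimal sketch, assuming walk interpolates pairwise through the lists just as it does for two prompts (the prompts, seeds, and name below are illustrative):

from stable_diffusion_videos import walk

walk(
    prompts=['a cat', 'a dog', 'a bird'],
    seeds=[42, 1337, 2022],
    output_dir='dreams',
    name='animals_chain',
    num_steps=100,           # interpolation steps between each pair of prompts
    make_video=True,
    do_loop=True,            # morph from the last prompt back to the first
)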

Run the App Locally

from stable_diffusion_videos import interface

interface.launch()
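The interface is a standard Gradio app, so the usual launch options apply. For example, to get a temporary public URL (standard Gradio behavior, not specific to this package):

interface.launch(share=True)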

Credits

This work builds on a script shared by @karpathy. The script was modified into this gist, which was then updated and extended into this repo.

Contributing

You can file any issues or feature requests here.

Enjoy 🤗

Extras

Upsample with Real-ESRGAN

You can also 4x upsample your images with Real-ESRGAN!

First, you'll need to install it...

pip install realesrgan

Then, you'll be able to use upsample=True in the walk function, like this:

from stable_diffusion_videos import walk

walk(['a cat', 'a dog'], [234, 345], upsample=True)

The above may cause you to run out of VRAM. No problem: you can do the upsampling separately.

To upsample an individual image:

from stable_diffusion_videos import PipelineRealESRGAN

pipe = PipelineRealESRGAN.from_pretrained('nateraw/real-esrgan')
enhanced_image = pipe('your_file.jpg')
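Assuming the returned enhanced_image is a PIL image (an assumption; check the pipeline's return type), you can save it like any other image:

enhanced_image.save('your_file_4x.jpg')  # output filename is just an example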

Or, to do a whole folder:

from stable_diffusion_videos import PipelineRealESRGAN

pipe = PipelineRealESRGAN.from_pretrained('nateraw/real-esrgan')
pipe.upsample_imagefolder('path/to/images/', 'path/to/output_dir')