
Vid3D: Synthesis of Dynamic 3D Scenes using 2D Video Diffusion

Rishab Parthasarathy¹, Zachary Ankner¹,², Aaron Gokaslan²,³

¹Massachusetts Institute of Technology, ²Databricks Mosaic Research, ³Cornell University

This repository contains the official implementation of Vid3D: Synthesis of Dynamic 3D Scenes using 2D Video Diffusion.

Abstract

A recent frontier in computer vision has been the task of 3D video generation, which consists of generating a time-varying 3D representation of a scene. To generate dynamic 3D scenes, current methods explicitly model 3D temporal dynamics by jointly optimizing for consistency across both time and views of the scene. In this paper, we instead investigate whether it is necessary to explicitly enforce multiview consistency over time, as current approaches do, or if it is sufficient for a model to generate 3D representations of each timestep independently. We hence propose a model, Vid3D, that leverages 2D video diffusion to generate 3D videos by first generating a 2D "seed" of the video's temporal dynamics and then independently generating a 3D representation for each timestep in the seed video. We evaluate Vid3D against two state-of-the-art 3D video generation methods and find that Vid3D achieves comparable results despite not explicitly modeling 3D temporal dynamics. We further ablate how the quality of Vid3D depends on the number of views generated per frame. While we observe some degradation with fewer views, it remains minor. Our results thus suggest that 3D temporal knowledge may not be necessary to generate high-quality dynamic 3D scenes, potentially enabling simpler generative algorithms for this task.
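
For intuition, the following is a minimal Python sketch of the two-stage procedure described above. It is illustrative only: the callables it takes (video_diffusion, multiview_diffusion, fit_splat) are hypothetical placeholders, not the actual interfaces used in this repository.

```python
# Illustrative sketch of the two-stage Vid3D pipeline described in the abstract.
# The callables passed in (video_diffusion, multiview_diffusion, fit_splat) are
# hypothetical placeholders, not the actual model APIs used in this repository.

def generate_3d_video(reference_image, video_diffusion, multiview_diffusion,
                      fit_splat, num_views=18):
    """Generate a dynamic 3D scene as one Gaussian splat per timestep."""
    # Stage 1: generate a 2D "seed" video that captures the temporal dynamics.
    seed_frames = video_diffusion(reference_image)

    # Stage 2: each timestep is handled independently -- no explicit 3D
    # temporal consistency is enforced across frames.
    splats = []
    for frame in seed_frames:
        views = multiview_diffusion(frame, num_views=num_views)  # multi-view generation
        splats.append(fit_splat(views))  # fit a 3D Gaussian splat to the views

    # Playing the per-frame splats back in order yields the 3D video.
    return splats
```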

Examples

  - cat-wave.mp4
  - cat-wave-alt1.mp4
  - fox-game.mp4
  - fox-game6.mp4
  - squirrel-nut.mp4
  - squirrel-nut4.mp4
  - squirrel-saxophone.mp4
  - squirrel-saxophone5.mp4

Instructions:

  1. Install the requirements:
     pip install -r requirements.txt
  2. Download the weights for multi-view generation (from the great V3D paper):
     wget https://huggingface.co/heheyas/V3D/resolve/main/V3D.ckpt -O ckpts/V3D_512.ckpt
     wget https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/resolve/main/svd_xt.safetensors -O ckpts/svd_xt.safetensors
  3. We provide scripts to run our code on a node with 8 GPUs, each with 80 GB of memory. First, generate the seed videos:
     cd vid3d
     bash scripts/seed.sh
  4. Generate multi-views of each frame:
     bash scripts/frame_to_multi_view.sh
  5. Convert the multi-views to Gaussian splats:
     bash scripts/multi_view_to_splat.sh
  6. Render the Gaussian splats from varying angles (a sketch chaining these steps follows this list):
     bash scripts/render_splat.sh
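
To run the whole pipeline in one go, the four steps can be chained. The sketch below is an illustrative driver (not part of the repository) that invokes the provided scripts in order via Python's subprocess module; the script paths are taken from the instructions above, and REPO_DIR should point at your checkout.

```python
# Optional convenience driver (not part of the repository): runs the four
# provided scripts in sequence. Adjust REPO_DIR to wherever you cloned the repo.
import subprocess
from pathlib import Path

REPO_DIR = Path("vid3d")  # directory entered via `cd vid3d` in step 3

STEPS = [
    "scripts/seed.sh",                 # generate the 2D seed videos
    "scripts/frame_to_multi_view.sh",  # generate multi-views of each frame
    "scripts/multi_view_to_splat.sh",  # fit Gaussian splats to the views
    "scripts/render_splat.sh",         # render the splats from varying angles
]

for script in STEPS:
    print(f"Running {script} ...")
    subprocess.run(["bash", script], cwd=REPO_DIR, check=True)  # abort on failure
```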

Citation:

If you found our work useful, please consider citing us:

@article{parthasarathy2024vid3d,
  author    = {Rishab Parthasarathy and Zachary Ankner and Aaron Gokaslan},
  title     = {Vid3D: Synthesis of Dynamic 3D Scenes using 2D Video Diffusion},
  journal   = {arXiv preprint arXiv:2406.11196},
  year      = {2024}
}

Acknowledgements

We thank the authors of 3D Gaussian Splatting and V3D for the codebases on which this project is built.

This project began as a class project for the Advances in Computer Vision class at MIT. We would like to thank Professors Sara Beery, Kaiming He, Mina Konakovic Lukovic, and Vincent Sitzmann for teaching the class, along with Joanna Materzynska and Emily Robinson for their valuable feedback.