LargeScaleNeRFPytorch

1. Non-official implementation of Block-NeRF and Mega-NeRF in PyTorch. 2. Train your large-scale NeRF in the wild. 3. Weekly classified NeRF literature.


We track weekly NeRF papers and classify them. All previously published NeRF papers have been added to the list. We provide an English version and a Chinese version. We welcome contributions and corrections via PR.

We also provide an Excel version (the metadata) of all NeRF papers; you can add your own comments or build your own paper-analysis tools on top of the structured metadata.
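For example, a minimal sketch for loading the spreadsheet with pandas; the filename below is a placeholder, substitute the actual file from the download:

import pandas as pd  # requires pandas and openpyxl

papers = pd.read_excel("nerf_papers.xlsx")  # placeholder filename
print(papers.columns.tolist())              # inspect the available metadata fields
print(len(papers), "papers in total")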

Large-scale Neural Radiance Fields in PyTorch


1. Introduction

Since I have changed my research direction, updates to the following code may be slow. Many improvements in this repo will not be published as papers for now, so feel free to use them. The weekly NeRF list will be updated as usual.

This project aims to benchmark several state-of-the-art large-scale radiance field algorithms, not restricted to the original Block-NeRF algorithm.

Block-NeRF builds the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.

This project is a non-official implementation of Block-NeRF. You can expect the following from this repository:

  1. Large-scale NeRF training. The current results are as follows:

     Training splits:

     [video: oct2_124_300_frames_trim.mp4]

     Rotation:

     [video: rotation.mov]

  2. SOTA custom scenes. Reconstruct SOTA NeRFs from your own collected photos. Here is a reconstructed video of my workstation:

     [video: sm01_04.mp4]

  3. Google Colab support. Run the trained Block-NeRF on Google Colab with detailed visualizations (not finished yet):

     [Open In Colab]

Other features of this project include:

  • PyTorch implementation. The official Block-NeRF paper uses TensorFlow and requires TPUs; this implementation only needs PyTorch.

  • GPU efficient. We ensure that almost all our experiments can be carried out on eight NVIDIA 2080Ti GPUs.

  • Quick download. We host many datasets on Google Drive so that downloading becomes much faster.

  • Uniform data format. The original Block-NeRF paper requires downloading tons of data from Google Cloud Platform. This repo provides processed data and convenient scripts, together with a uniform data format that suits many large-scale neural-field datasets.

  • State-of-the-art performance. This project produces state-of-the-art rendering quality with better efficiency.

  • Quick validation. We provide quick validation tools to evaluate your ideas so that you don't need to train on the full Block-NeRF dataset.

  • Open research. Along with this project, we aim to develop a strong community working on this topic. You are welcome to join us (if you use WeChat, feel free to add me: ytc407). The contributors of this project are listed at the bottom of this page.

We hope our efforts help your research or projects!

2. News

  • [2022.12.23] Released several weeks of classified NeRF papers. Too many papers have come out recently, so updates are slower than usual.
  • [2022.9.12] Training Block-NeRF on the Waymo dataset, reaching PSNR 24.3.
  • [2022.8.31] Training Mega-NeRF on the Waymo dataset; the loss is still NaN.
  • [2022.8.24] Support the full Mega-NeRF pipeline.
  • [2022.8.18] Support all previous papers in weekly classified NeRF.
  • [2022.8.17] Support classification in weekly NeRF.
  • [2022.8.16] Support evaluation scripts and data format standard. Getting some results.
  • [2022.8.13] Add estimated camera pose and release a better dataset.
  • [2022.8.12] Add weekly NeRF functions.
  • [2022.8.8] Add the NeRF reconstruction code and doc for custom purposes.
  • [2022.7.28] The data preprocessing script is finished.
  • [2022.7.20] This project started!

3. Installation

Expand / collapse installation steps.
  1. Create conda environment.
    conda create -n nerf-block python=3.9
  2. Install TensorFlow, PyTorch and other libs. Our version: TensorFlow with CUDA 11.7.
    pip install --upgrade pip
    pip install -r requirements.txt
    pip install tensorflow 
    pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
    conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
  3. Install other libs used for reconstructing custom scenes; this is only needed when you want to build your own scenes.
    sudo apt-get install colmap
    sudo apt-get install imagemagick  # requires sudo access
    conda install pytorch-scatter -c pyg  # or install via https://github.com/rusty1s/pytorch_scatter
    You can use a local build of COLMAP as well if you do not have sudo access on your server. However, we found that if the COLMAP parameters are not set up properly, you will not get SOTA performance.
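    After installation, a quick sanity check (a convenience snippet, not a repo script) can confirm that both PyTorch and TensorFlow see your GPUs:

    # sanity-check the environment; not part of the repo, just a convenience snippet
    import tensorflow as tf
    import torch

    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("tensorflow", tf.__version__, "| GPUs:", tf.config.list_physical_devices("GPU"))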

4. Large-scale NeRF on the public datasets

We provide implementations of two algorithms: Block-NeRF and Mega-NeRF. Most of the Mega-NeRF implementation comes from the official Mega-NeRF repo; on top of it, we support Waymo dataset training and visualizations and fix some Mega-NeRF bugs. In the following, the Mega-NeRF commands are commented out to prevent confusion.

We provide useful debugging commands in many scripts. Debug commands require only a single GPU and may run slower than the standard commands; use the standard commands for experiments and comparisons. A sample bash file looks like this:

# arguments
ARGUMENTS HERE  # we provide sample arguments with explanations and options here.
# for debugging, uncomment the following line
# DEBUG COMMAND HERE
# for standard training, comment out the following line when debugging
STANDARD TRAINING COMMAND HERE

Click the following sub-section titles to expand / collapse steps.

4.1 Download processed data and pre-trained models.

What you should know before downloading the data:

(1) Disclaimer: you should ensure that you have permission from the original data provider. You should first sign the license on the official Waymo website to get permission to download the Waymo data. Other data should likewise be downloaded and used in accordance with its original license.

(2) Our processed Waymo data is significantly smaller than the original version (19.1GB vs. 191GB) because we store camera poses instead of raw ray directions, which can be recomputed on the fly (see the sketch below). Besides, our processed data is friendlier to PyTorch dataloaders, and it supports training with both Mega-NeRF and Block-NeRF.
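For intuition, per-pixel rays can be recovered from a camera-to-world pose and the intrinsics. The following is a standard NeRF-style sketch (an OpenCV pixel convention is assumed), not the repo's exact dataloader code:

import torch

def get_rays(c2w, K, H, W):
    # c2w: 3x4 or 4x4 camera-to-world pose; K: 3x3 intrinsics matrix
    j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    # pixel coordinates -> camera-space ray directions
    dirs = torch.stack([(i - K[0, 2]) / K[0, 0],
                        (j - K[1, 2]) / K[1, 1],
                        torch.ones_like(i)], dim=-1)
    rays_d = dirs @ c2w[:3, :3].T             # rotate into world space
    rays_o = c2w[:3, 3].expand(rays_d.shape)  # camera center for every pixel
    return rays_o, rays_d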

Download the data and pretrained models from Google Drive. You may use gdown to download the files from the command line or, as sketched below, via its Python API.
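A minimal sketch using the gdown Python API; the folder URL below is a placeholder, substitute the actual Google Drive link:

import gdown

url = "https://drive.google.com/drive/folders/<FOLDER_ID>"  # placeholder URL
gdown.download_folder(url=url, output="data/", quiet=False)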

If you are interested in processing the raw Waymo data on your own, please refer to this doc.

The downloaded data should look like this:

data
   |——————pytorch_waymo_dataset                     // the root folder of the PyTorch Waymo dataset
   |        └——————cam_info.json                    // extracted cam2img information as a dict
   |        └——————coordinates.pt                   // global camera information used in Mega-NeRF
   |        └——————train                            // train data
   |        |         └——————metadata               // metadata per image (camera information, etc.)
   |        |         └——————rgbs                   // rgb images
   |        |         └——————split_block_train.json // split block information
   |        |         └——————train_all_meta.json    // all meta information in the train folder
   |        └——————val                              // val data with the same structure as train
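A minimal sketch for poking at the downloaded structure (the exact JSON schemas are not documented here, so print and inspect them yourself):

import json
from pathlib import Path

root = Path("data/pytorch_waymo_dataset")
cam_info = json.loads((root / "cam_info.json").read_text())  # cam2img dict
print(type(cam_info), list(cam_info)[:3])

rgbs = sorted((root / "train" / "rgbs").iterdir())
print(len(rgbs), "training images, e.g.", rgbs[0].name)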

If you wish to run the Mega-NeRF algorithm, you will need to create masks prior to the training or evaluation. Please refer to this doc for more details. You can download other Mega-NeRF benchmarks following this doc.

4.2 Run pretrained models.

We recommend evaluating the pretrained models before training your own. This way, you can quickly see the results of our provided models and rule out many environment issues. Run the following script to evaluate the pre-trained models downloaded in Section 4.1:

bash scripts/block_nerf_eval.sh
# bash scripts/mega_nerf_eval.sh  # for the Mega-NeRF algorithm.
# The rendered images are placed under ${EXP_FOLDER}, which defaults to data/mega/${DATASET_NAME}/exp_logs.
# A sample output log from this script is at docs/sample_logs/mega_nerf_eval.txt.

4.3 Train sub-modules.

Run the following commands to train the sub-modules (the blocks):

export BLOCK_INDEX=0
bash scripts/block_nerf_train.sh ${BLOCK_INDEX}                   # For the Block-NeRF algorithm.
# The training TensorBoard log is written under logs/. Run "tensorboard --logdir logs/" to view it.

# bash scripts/mega_nerf_train_sub_modules.sh ${BLOCK_INDEX}      # For the Mega-NeRF algorithm.
# A sample training log is at docs/sample_logs/mega_nerf_train_sub_modules.txt.
# You can train multiple modules simultaneously via parscript (https://github.com/mtli/parscript).
# I personally use slurm launch scripts rather than parscript to launch all the required modules.
# The training time without multi-processing is around one day.

If you are running the Mega-NeRF algorithm, you also need to merge the trained modules:

bash scripts/merge_sub_modules.sh

The sample log can be found at docs/sample_logs/merge_sub_modules.txt.

5. Build your custom large-scale NeRF

Expand / collapse steps for building a custom NeRF world.
  1. Put your images under the data folder. The structure should look like this:

    data
       |——————Madoka          // Your folder name here.
       |        └——————source // Source images should be put here.
       |                 └——————1.png
       |                 └——————2.png
       |                 └——————...

    Sample data is provided in our Google Drive folder. The Madoka and Otobai scenes can be found at this link.

  2. Run COLMAP to reconstruct the scene. This will probably take a long time.

    python tools/imgs2poses.py data/Madoka

    You can replace data/Madoka with your own data folder. If your COLMAP version is newer than 3.6 (which should not happen if you installed via apt-get), you need to change export_path to output_path in colmap_wrapper.py, as sketched below.
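    For reference, the mapper invocation inside colmap_wrapper.py looks roughly like the sketch below (the paths and exact argument list are assumptions); newer COLMAP versions renamed the mapper's --export_path flag to --output_path:

    # rough sketch, not the repo's verbatim code
    mapper_args = [
        'colmap', 'mapper',
        '--database_path', 'data/Madoka/database.db',
        '--image_path', 'data/Madoka/images',
        '--export_path', 'data/Madoka/sparse',  # use '--output_path' for COLMAP > 3.6
    ]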

  3. Train the NeRF scene.

    python run.py --config configs/custom/Madoka.py

    You can replace configs/custom/Madoka.py with other configs.

  4. Validate the training results to generate a fly-through video.

    python run.py --config configs/custom/Madoka.py --render_only --render_video --render_video_factor 8

6. Citations & acknowledgements

The original Block-NeRF and Mega-NeRF papers can be cited as follows:

 @InProceedings{Tancik_2022_CVPR,
    author    = {Tancik, Matthew and Casser, Vincent and Yan, Xinchen and Pradhan, Sabeek and Mildenhall, Ben and Srinivasan, Pratul P. and Barron, Jonathan T. and Kretzschmar, Henrik},
    title     = {Block-NeRF: Scalable Large Scene Neural View Synthesis},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {8248-8258}
}

@inproceedings{turki2022mega,
  title={Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs},
  author={Turki, Haithem and Ramanan, Deva and Satyanarayanan, Mahadev},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={12922--12931},
  year={2022}
}

We build on code and data from DVGO, Mega-NeRF, nerf-pl and SVOX2. Thanks for their great work!

Contributors ✨

Thanks goes to these wonderful people (emoji key):


Zelin Zhao

💻 🚧

EZ-Yang

💻

Alex-Zhang

🐛

Fan Lu

🐛

MaybeShewill-CV

🐛

This project follows the all-contributors specification. Contributions of any kind welcome!