Antoni Rosinol · John J. Leonard · Luca Carlone
Clone repo with submodules:
git clone https://github.com/OceanYing/NeRF-SLAM.git --recurse-submodules
git submodule update --init --recursive
From this point on, use a virtual environment.
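For example, a minimal sketch using Python's built-in venv (the environment name is just a placeholder; conda works equally well):

python3 -m venv ./venv_nerfslam
source ./venv_nerfslam/bin/activate

Then install torch (see the PyTorch previous-versions page for other versions):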
# CUDA 11.3
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
Pip install requirements:
pip install -r requirements.txt
pip install -r ./thirdparty/gtsam/python/requirements.txt
Compile ngp (you need cmake>3.22):
cmake ./thirdparty/instant-ngp -B build_ngp
cmake --build build_ngp --config RelWithDebInfo -j
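If the configure step complains that your cmake is too old, one convenient option (an assumption on my part; your distro's packages or Kitware's binary releases work just as well) is the cmake wheel from PyPI, which installs into the active virtual environment:

pip install --upgrade cmake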
Compile gtsam and enable the python wrapper:
cmake ./thirdparty/gtsam -DGTSAM_BUILD_PYTHON=1 -B build_gtsam
cmake --build build_gtsam --config RelWithDebInfo -j
cd build_gtsam
make python-install
Note: at the cmake --build build_gtsam --config RelWithDebInfo -j step, you may hit the error reported in Issue 23 of the original NeRF-SLAM repo, caused by the python wrapper's parser (see also the third item under Installation issues below). Change const std::vector<const gtsam::Matrix&>& to const std::vector<gtsam::Matrix>&: a std::vector cannot hold references, so the reference-qualified element type is not valid C++.
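Once make python-install succeeds, a quick sanity check that the wrapper is importable (printing the module path also confirms which installation you picked up):

python -c "import gtsam; print(gtsam.__file__)"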
Finally install:
python setup.py install
This will download just one of the Replica scenes:
./scripts/download_replica_sample.bash
python ./examples/slam_demo.py --dataset_dir=./datasets/Replica/office0 --dataset_name=nerf --buffer=100 --slam --parallel_run --img_stride=2 --fusion='nerf' --multi_gpu --gui
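The flags, as I read them from their names (assumptions on my part; the argparse definitions in examples/slam_demo.py are the authoritative source):

# --buffer=100      number of frames kept in the SLAM buffer (assumed)
# --img_stride=2    process every 2nd input frame (assumed)
# --slam            run the dense SLAM front-end/back-end (assumed)
# --parallel_run    run SLAM and fusion concurrently (assumed)
# --fusion='nerf'   fusion backend: 'nerf' here, 'sigma' for Sigma-Fusion (see below)
# --multi_gpu       split the workload across GPUs (assumed)
# --gui             open the interactive viewer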
This repo also implements Sigma-Fusion: just pass --fusion='sigma' instead to run it.
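For example, the same demo invocation with only the fusion backend swapped:

python ./examples/slam_demo.py --dataset_dir=./datasets/Replica/office0 --dataset_name=nerf --buffer=100 --slam --parallel_run --img_stride=2 --fusion='sigma' --multi_gpu --gui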
This is a GPU-memory-intensive pipeline; to monitor your GPU usage, I'd recommend nvitop.
Install nvitop in a local env:
pip3 install --upgrade nvitop
Keep it running on a terminal, and monitor GPU memory usage:
nvitop --monitor
If you consistently see out-of-memory errors, you may either need to change parameters (a sketch follows this list) or buy better GPUs :). The memory-consuming parts of this pipeline are:
- Frame to frame correlation volumes (but can be avoided using on-the-fly correlation computation).
- Volumetric rendering (intrinsically memory intensive, tricks exist, but ultimately we need to move to light fields or some better representation (OpenVDB?)).
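As a hedged starting point, assuming --buffer and --img_stride behave as their names suggest (a smaller buffer keeps fewer frames and their correlation volumes resident; a larger stride feeds in fewer frames overall), try the demo with reduced settings:

python ./examples/slam_demo.py --dataset_dir=./datasets/Replica/office0 --dataset_name=nerf --buffer=50 --slam --parallel_run --img_stride=4 --fusion='nerf' --multi_gpu --gui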
- Gtsam not working: check that the python wrapper is installed; instructions here: gtsam_python. Make sure you use our gtsam fork, which exposes more of gtsam's functionality to Python.
- Gtsam is not really needed as a dependency; I just used it to experiment with adding an IMU and/or stereo cameras, and to have an easier interface for building factor graphs. This didn't quite work though, because the network seemed to have a concept of scale, and updating poses/landmarks and then the optical flow didn't quite work.
- Somehow the parser converts this to const std::vector<const gtsam::Matrix&>&, so I need to manually remove the inner const X& in gtsam/build/python/linear.cpp, and also add #include <pybind11/stl.h>, because the build otherwise fails with: Did you forget to `#include <pybind11/stl.h>`? A sketch of the fix follows below.
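A sketch of that manual fix as shell one-liners (the file path and patterns are assumptions based on the note above; your build directory and the exact offending signatures may differ, so back the file up first):

# Drop the invalid inner reference qualifier from the generated wrapper code
sed -i 's/std::vector<const gtsam::Matrix&>/std::vector<gtsam::Matrix>/g' ./build_gtsam/python/linear.cpp
# Prepend the pybind11 STL header if it is missing
grep -q 'pybind11/stl.h' ./build_gtsam/python/linear.cpp || sed -i '1i #include <pybind11/stl.h>' ./build_gtsam/python/linear.cpp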
@article{rosinol2022nerf,
title={NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields},
author={Rosinol, Antoni and Leonard, John J and Carlone, Luca},
journal={arXiv preprint arXiv:2210.13641},
year={2022}
}
This repo is BSD Licensed.
It reimplements parts of Droid-SLAM (BSD Licensed).
Our changes to instant-NGP (Nvidia License) are released in our fork of instant-ngp (branch feature/nerf_slam) and added here as a thirdparty dependency using git submodules.
This work has been possible thanks to the open-source code from Droid-SLAM and Instant-NGP, as well as the open-source datasets Replica and Cube-Diorama.
I have many ideas on how to improve this approach, but I just graduated so I won't have much time to do another PhD... If you are interested in building on top of this, feel free to reach out :) arosinol@mit.edu