PyTorch implementation of the paper "NeRF-SOS: Any-View Self-supervised Object Segmentation from Complex Real-World Scenes".
We recommend using conda to set up the running environment; a setup sketch follows the dependency list. The following dependencies are required:
pytorch=1.7.0
torchvision=0.8.0
cudatoolkit=11.0
tensorboard=2.7.0
opencv
imageio
imageio-ffmpeg
configargparse
scipy
matplotlib
tqdm
mrc
lpips
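For example, one way to create such an environment (the environment name nerfsos and the Python version are our assumptions, not pinned by the repository; note that on pip the opencv package is published as opencv-python):

conda create -n nerfsos python=3.8
conda activate nerfsos
conda install pytorch=1.7.0 torchvision=0.8.0 cudatoolkit=11.0 -c pytorch
pip install tensorboard==2.7.0 opencv-python imageio imageio-ffmpeg configargparse scipy matplotlib tqdm mrc lpips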
To run our code on the NeRF-LLFF dataset, users first need to download the data from the official cloud drive, then extract the packages according to the following directory structure:
├── configs
│ ├── ...
│
├── datasets
│ ├── nerf_llff_data
│ │ └── flower # downloaded llff dataset
│ │ └── fortress # downloaded llff dataset
│ │ └── ...
Then, place the ``segments'' folder (LINK) under DATASET/SCENE/ (e.g., nerf_llff_data/flower/segments).
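For example, assuming the segments package is downloaded as segments.zip with a top-level segments/ folder (both the archive name and its layout are assumptions):

# hypothetical archive name; extract into the scene folder
unzip segments.zip -d datasets/nerf_llff_data/flower/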
Next, run the following commands to generate the training and testing data:
cd data
python gen_dataset.py --config ../configs/flower_full.txt --data_path /PATH_TO_DATA/nerf_llff_data_fullres/flower/ --data_type llff
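The same command applies to other scenes; for example, for ``Fortress'' (the config name fortress_full.txt is an assumption modeled on the flower config):

python gen_dataset.py --config ../configs/fortress_full.txt --data_path /PATH_TO_DATA/nerf_llff_data_fullres/fortress/ --data_type llff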
We provide prepared data for the scene ``Flower'' at the LINK; you can download and use it directly without any modification.
Prepared data for the scene ``Fortress'' can be found at the LINK.
To render masks with the self-supervised trained model, run:
bash scripts/eval.sh
To render masks as a video with the self-supervised trained model, run:
bash scripts/eval_video.sh
After preparing the datasets, users can train NeRF-SOS with the following command:
bash scripts/flower_node0.sh
For more options, please check the help message:
python run_nerf.py -h
During training, users can monitor logging information through TensorBoard:
tensorboard --logdir='./logs' --port <your_port> --host 0.0.0.0
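For example, with port 6006 (any free port works; then open http://localhost:6006 in a browser):

tensorboard --logdir='./logs' --port 6006 --host 0.0.0.0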
When training is done, users can synthesize a demo video by running:
bash scripts/eval_video.sh
TODO: support COLMAP poses.