
DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images

This repository contains the implementation of the paper:

DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images
Bing Wang*, Lu Chen*, Bo Yang
Paper | Supplementary | Video

Video (Youtube)

Decomposition and Manipulation:

 

Qualitative Results

Scene Decomposition



Object Manipulation

Rigid Transformation



Deformable Manipulation


Instance 3D Reconstruction from Posed Images

Installation

python >=3.7
pip install torch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1
pip install -r environment.txt
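To confirm that the pinned PyTorch build is installed and that CUDA is visible, you can run a quick sanity check (a standalone snippet, not part of the repository):

# Check the installed versions and CUDA availability.
import torch
import torchvision

print(torch.__version__)        # expected: 1.8.1
print(torchvision.__version__)  # expected: 0.9.1
print(torch.cuda.is_available())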

Datasets

To evaluate our model or train a new model from scratch, you need to obtain the corresponding datasets. In this paper, we consider three different datasets:

DM-SR

Replica

ScanNet
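The config files in this repository reference data under ./data (for example ./data/dmsr/study). Below is a small sketch to check that the datasets are in place; only the dmsr path appears in the configs, so the replica and scannet folder names are assumptions and may differ in your setup:

# Verify that the expected dataset folders exist under ./data.
# Only the dmsr path appears in the configs; the replica and
# scannet folder names are assumed here.
import os

for name in ["dmsr", "replica", "scannet"]:
    path = os.path.join("data", name)
    print(path, "found" if os.path.isdir(path) else "missing")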

Training

After setting the parameters you want, you can train a model with one of the commands below. For example:

To use the full decomposition functionality, run a command like:

CUDA_VISIBLE_DEVICES=0 python -u train_dmsr.py --config configs/train/dmsr/study.txt

If you do not want to decompose empty space, remove the penalize parameter from the config file and run the command above.
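The training and test scripts are driven by plain-text config files passed via --config. The sketch below shows how such flags could be parsed with configargparse, which is commonly used in NeRF codebases; the actual parser and flag set in train_dmsr.py may differ, and only render, log_time, and penalize are mentioned in this README:

# Illustrative config parsing only; the real scripts may define different flags.
# configargparse merges values from the --config file with command-line overrides.
import configargparse

parser = configargparse.ArgumentParser()
parser.add_argument("--config", is_config_file=True, help="path to the config txt")
parser.add_argument("--render", action="store_true", help="render/evaluate instead of training")
parser.add_argument("--log_time", type=str, default=None, help="name of the log folder to load")
parser.add_argument("--penalize", action="store_true", help="also decompose empty space; omit to disable")
args = parser.parse_args()
print(args)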

Evaluation

Decomposition

We used PSNR, SSIM, LPIPS, and mAPs to evaluate our tasks:

For the decomposition evaluation:

Add render = True and log_time = "your log folder name" to the config txt.

Then run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/test/dmsr/study.txt
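For reference, PSNR can be computed directly from the mean squared error between rendered and ground-truth images; the snippet below is a standalone sketch, not code from this repository:

# PSNR for images normalized to [0, 1]: PSNR = -10 * log10(MSE).
import torch

def psnr(rendered, target):
    mse = torch.mean((rendered - target) ** 2)
    return -10.0 * torch.log10(mse)

# Example with random (H, W, 3) images in [0, 1].
pred = torch.rand(480, 640, 3)
gt = torch.rand(480, 640, 3)
print(psnr(pred, gt).item())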

Manipulation

The manipulation operation includes two parts: evaluation and demo generation.

Manipulated ground truth is only provided for the DM-SR dataset, so manipulation evaluation is limited to DM-SR.

Change render = True to mani_eval = True, and add target_label and editor_mode to specify which object to manipulate and which manipulation to apply. For the exact format, see ./configs/manipulation/dmsr/editor_multi/study.txt.

You can run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/manipulation/dmsr/manipulation_multi/study.txt

For demo generation, change render = True to mani_demo = True and edit objs_info.json to specify the object manipulations; the file is located at ./data/dmsr/study/objs_info.json.
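If you prefer to edit the file programmatically, the sketch below loads, inspects, and rewrites objs_info.json; the per-object fields you change are whatever the shipped file defines, so no key names are assumed here:

# Load, inspect, and rewrite ./data/dmsr/study/objs_info.json.
import json

path = "./data/dmsr/study/objs_info.json"
with open(path, "r") as f:
    objs_info = json.load(f)

print(json.dumps(objs_info, indent=2)[:500])  # inspect the current entries

# ...edit the entries for the objects you want to manipulate...

with open(path, "w") as f:
    json.dump(objs_info, f, indent=2)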

To render moving view poses, set view_id = null and give views a number in the config file.

Note that ins_map is a global matching list computed from the statistics in matching_logs.json.

You can run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/test/dmsr/study.txt

Baseline

SOTA method Mask R-CNN

Citation

If you find our work useful in your research, please consider citing:

Acknowledgement

In this project we use (parts of) the implementations of the following works:

We thank the respective authors for open sourcing their methods.