Learning to Reconstruct 3D Non-Cuboid Room Layout from a Single RGB Image
Cheng Yang*, Jia Zheng*, Xili Dai, Rui Tang, Yi Ma, Xiaojun Yuan.
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022
[arXiv] [Paper] [Supplementary Material]
(*: Equal contribution)
The code is tested with Ubuntu 16.04, PyTorch v1.5, CUDA 10.1 and cuDNN v7.6.
# create conda env
conda create -n layout python=3.6
# activate conda env
conda activate layout
# install pytorch
conda install pytorch==1.5.0 torchvision==0.6.0 cudatoolkit=10.1 -c pytorch
# install dependencies
pip install -r requirements.txt
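Optionally, you can verify that the environment matches the versions above with a quick check (this is only a sanity check, not part of the original setup):

```python
# Optional sanity check: confirm PyTorch/CUDA versions match the tested configuration.
import torch
import torchvision

print("PyTorch:", torch.__version__)            # expected 1.5.0
print("torchvision:", torchvision.__version__)  # expected 0.6.0
print("CUDA version:", torch.version.cuda)      # expected 10.1
print("CUDA available:", torch.cuda.is_available())
```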
Please download the Structured3D dataset and our processed 2D line annotations. The directory structure should look like:
data
└── Structured3D
    ├── Structured3D
    │   ├── scene_00000
    │   ├── scene_00001
    │   ├── scene_00002
    │   └── ...
    └── line_annotations.json
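As an optional sanity check (not part of the original pipeline), the snippet below assumes the layout above, counts the downloaded scenes, and simply verifies that the annotation file parses; the annotation schema itself is not documented here:

```python
# Optional sanity check for the Structured3D layout shown above.
import json
from pathlib import Path

root = Path("data/Structured3D")
scenes = sorted((root / "Structured3D").glob("scene_*"))
print(f"Found {len(scenes)} scenes, e.g. {[p.name for p in scenes[:3]]}")

# We only check that the processed line annotations parse; the schema is not shown here.
with open(root / "line_annotations.json") as f:
    annotations = json.load(f)
print(f"Loaded {len(annotations)} line-annotation entries")
```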
Please download the SUN RGB-D dataset, our processed 2D line annotations for the SUN RGB-D dataset, and the layout annotations of the NYUv2 303 dataset. The directory structure should look like:
data
└── SUNRGBD
    ├── SUNRGBD
    │   ├── kv1
    │   ├── kv2
    │   ├── realsense
    │   └── xtion
    ├── sunrgbd_train.json      // our extracted 2D line annotations of the SUN RGB-D train set
    ├── sunrgbd_test.json       // our extracted 2D line annotations of the SUN RGB-D test set
    └── nyu303_layout_test.npz  // 2D ground-truth layout annotations provided by the NYUv2 303 dataset
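Likewise, a minimal existence check for the SUN RGB-D files (assuming the layout above; purely optional):

```python
# Optional existence check for the SUN RGB-D layout shown above.
from pathlib import Path

root = Path("data/SUNRGBD")
for name in ["SUNRGBD/kv1", "SUNRGBD/kv2", "SUNRGBD/realsense", "SUNRGBD/xtion",
             "sunrgbd_train.json", "sunrgbd_test.json", "nyu303_layout_test.npz"]:
    path = root / name
    print(f"{'OK     ' if path.exists() else 'MISSING'} {path}")
```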
You can download our pre-trained models here:
- The model trained on the Structured3D dataset.
- The model trained on the SUN RGB-D and NYUv2 303 datasets.
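If you want to inspect a downloaded checkpoint before passing it to test.py, a plain torch.load is enough. The sketch below assumes the file is either a bare state_dict or a dict wrapping one; the path is a placeholder:

```python
# Inspect a downloaded checkpoint (path is a placeholder; internal keys are assumptions).
import torch

ckpt = torch.load("pretrained/s3d.pth", map_location="cpu")
# Checkpoints are commonly either a bare state_dict or a dict wrapping one.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state_dict)} parameter tensors")
for key in list(state_dict)[:5]:
    print(key, tuple(state_dict[key].shape))
```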
To train the model on the Structured3D dataset, run this command:
python train.py --model_name s3d --data Structured3D
To evaluate the model on the Structured3D dataset, run this command:
python test.py --pretrained DIR --data Structured3D
To train the model on the SUN RGB-D dataset and NYUv2 303 dataset, run these commands:
# First fine-tune the model on the SUN RGB-D dataset
python train.py --model_name sunrgbd --data SUNRGBD --pretrained Structured3D_DIR --split all --lr_step []
# Then fine-tune the model on the NYUv2 subset
python train.py --model_name nyu --data SUNRGBD --pretrained SUNRGBD_DIR --split nyu --lr_step [] --epochs 10
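The --pretrained flag above points the training script at the checkpoint to warm-start from. The repository's own loading code is not shown here; as a rough illustration of the usual PyTorch pattern for this kind of warm start (all names below are placeholders, not the actual implementation), one can keep only the weights that are compatible with the current model and re-initialize the rest:

```python
# Generic warm-start pattern (illustration only; not the repository's actual code).
import torch
import torch.nn as nn

def load_pretrained(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Warm-start `model` from a checkpoint, keeping only compatible weights."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    own = model.state_dict()
    # Keep only tensors whose name and shape match the current model; heads that
    # differ between datasets are simply left at their fresh initialization.
    compatible = {k: v for k, v in state_dict.items()
                  if k in own and v.shape == own[k].shape}
    model.load_state_dict(compatible, strict=False)
    print(f"loaded {len(compatible)}/{len(own)} tensors from {ckpt_path}")
    return model
```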
To evaluate the model on the NYUv2 303 dataset, run this command:
python test.py --pretrained DIR --data NYU303
To predict the results on custom images, run this command:
python test.py --pretrained DIR --data CUSTOM
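The exact folder that test.py reads custom images from is not documented here, so check the script; assuming it expects plain RGB images collected in one directory, a small helper like the following (all paths are placeholders) can gather and verify them first:

```python
# Hypothetical helper to collect custom RGB images before running test.py.
# Both paths are placeholders; check test.py for the directory it actually reads.
import shutil
from pathlib import Path
from PIL import Image

src_dir = Path("/path/to/my/photos")  # placeholder: wherever your images live
dst_dir = Path("data/custom")         # placeholder: the folder test.py expects
dst_dir.mkdir(parents=True, exist_ok=True)

for img_path in sorted(src_dir.glob("*.jpg")):
    with Image.open(img_path) as img:
        img.verify()  # fail early on corrupt files
    shutil.copy(img_path, dst_dir / img_path.name)
print(f"Copied {len(list(dst_dir.glob('*.jpg')))} images to {dst_dir}")
```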
@inproceedings{NonCuboidRoom,
title = {Learning to Reconstruct 3D Non-Cuboid Room Layout from a Single RGB Image},
author = {Cheng Yang and
Jia Zheng and
Xili Dai and
Rui Tang and
Yi Ma and
Xiaojun Yuan},
booktitle = {WACV},
year = {2022}
}
The code is released under the MIT license. Portions of the code are borrowed from HRNet-Object-Detection and CenterNet.
We would like to thank Lei Jin for providing the code for parsing the layout annotations in the SUN RGB-D dataset.