This is the code for our paper "GRAINS: Generative Recursive Autoencoders for INdoor Scenes".
Project webpage here.
(1) Because of the legal issue with the SUNCG dataset, we have removed the pretrained models and the room_wcf data file. Here we provide the data format definition for the room_wcf file used in our code and a sample file, which can be visualized with this script. Please follow this format to create your own room_wcf file from another indoor scene dataset in order to use our code.
(2) We have updated the training code to automatically tune the batch size based on the training set. As noted in our paper, the model works well with a batch size of about 1/10 of the training set size. A batch size that is too small causes mode collapse and jumping loss; the generated scenes then look mostly identical to each other and are not plausible.
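For reference, here is a minimal Python sketch of that heuristic (the function name and the clamping are illustrative only, not the exact logic in train.py):

# Pick a batch size of roughly 1/10 of the training set,
# clamped to at least 1 so very small datasets still train.
def suggest_batch_size(num_training_scenes, fraction=0.1, minimum=1):
    return max(minimum, int(num_training_scenes * fraction))

print(suggest_batch_size(12000))  # -> 1200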
The code has been tested with the following configurations. To re-run our code, we recommend one of the following software setups:
(a) Python 2.7 and Pytorch 0.3.1, OR
(b) Python 3.6/3.7 and Pytorch >1.0, and
(c) MATLAB (>2017a)
The best way to install the latest Python and PyTorch versions is via Anaconda.
- Download your version (depending on your OS) of anaconda from here.
- Make sure your conda is set up properly, i.e. that it is on your PATH:
export PATH="............./anaconda3/bin:$PATH"
- The following command at the terminal prompt should not throw any error
conda
- Create a virtual environment called "GRAINS".
conda create --name GRAINS
- Activate your virtual env:
source activate GRAINS
You are now set up with the working environment.
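To confirm the environment is usable, a quick sanity check (assuming you have installed PyTorch into the GRAINS environment):
python -c "import torch; print(torch.__version__)"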
Make a local copy of this repository using
git clone https://github.com/ManyiLi12345/GRAINS.git
There is an ongoing legal dispute over the use of the dataset our work was trained on. Follow the steps below at your own risk.
We use indoor scenes represented as hierarchies for training. To create the training data, first download the original SUNCG dataset and extract the house, object, and room_wcf folders under the path ./0-data/SUNCG/.
run ./1-genSuncgDataset/main_gendata.m
The output is saved in ./0-data/1-graphs.
run ./2-genHierarchies/main_buildhierarchies.m
The output is saved in ./0-data/2-hierarchies.
run ./3-datapreparation/main_genSUNCGdataset.m
The output is saved in ./0-data/3-offsetrep.
run ./4-genPytorchData/main_genprelpos_pydata.m
The output is saved in ./0-data/4-pydata.
run ./4-training/train.py
It loads the training set from ./0-data/4-pydata. The trained model will be saved in ./0-data/models/.
run ./4-training/test.py
It loads the trained model from ./0-data/models/ and randomly generates 1000 scenes. The output is a set of scenes represented as hierarchies, saved as ./0-data/4-pydata/generated_scenes.mat.
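If you want to inspect the generated hierarchies in Python before running the reconstruction step, here is a minimal sketch using scipy; the exact variable names stored in the .mat file are not listed here, so print the keys first:

import scipy.io as sio

# Load the scenes generated by test.py and list the stored variables
# (keys starting with '__' are MATLAB file-header entries).
data = sio.loadmat('./0-data/4-pydata/generated_scenes.mat')
print([k for k in data.keys() if not k.startswith('__')])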
run ./5-reconVAE/main_recon.m
It reconstructs the object OBBs in each scene from the generated hierarchy. The topview images are saved in ./0-data/5-generated_scenes/images/.
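To quickly browse the rendered topviews from Python, a minimal sketch (assuming the images are saved as .png files; adjust the pattern if main_recon.m writes a different format):

import glob
from PIL import Image

# Open the first few topview renderings of the generated scenes.
for path in sorted(glob.glob('./0-data/5-generated_scenes/images/*.png'))[:5]:
    Image.open(path).show()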
The training part of our code is built upon GRASS.
If you find this work useful for your research, please cite GRAINS using the bibtex below:
@article{li2019grains,
title={Grains: Generative recursive autoencoders for indoor scenes},
author={Li, Manyi and Patil, Akshay Gadi and Xu, Kai and Chaudhuri, Siddhartha and Khan, Owais and Shamir, Ariel and Tu, Changhe and Chen, Baoquan and Cohen-Or, Daniel and Zhang, Hao},
journal={ACM Transactions on Graphics (TOG)},
volume={38},
number={2},
pages={12},
year={2019},
publisher={ACM}
}