By Chen Liu, Jiajun Wu, Pushmeet Kohli, and Yasutaka Furukawa
This paper addresses the problem of converting a rasterized floorplan image into a vector-graphics representation. Our algorithm significantly outperforms existing methods, achieving around 90% precision and recall and approaching production-ready performance. To learn more, please see our ICCV 2017 paper or visit our project website.
This code is a Torch7 implementation of the algorithm described in our paper.
- Please install the latest Torch
- Please install Python 2.7
To use our trained model, please first download it from Google Drive and put it under the "checkpoint/" folder (or specify its path via the option -loadModel="path to the downloaded model").
Our model is fine-tuned from the pose estimation network introduced in the paper "Human Pose Estimation via Convolutional Part Heatmap Regression". You can download their model here (the MPII one) and put it under the "PoseEstimation/" folder (or specify its path via the option -loadPoseEstimationModel).
Our vector-graphics annotations are under the "data/" folder. Lists of (raster floorplan image path, vector-graphics annotation path) pairs can be found in "train.txt", "val.txt", and "test.txt".
Each row in a vector-graphics annotation contains (x_min, y_min, x_max, y_max, category, dump_1, dump_2). The category can be a wall, a door (called an opening in the paper), a specific icon type, or a specific room type. For walls and doors, the two points (x_min, y_min) and (x_max, y_max) form a line. For icons, x_min, y_min, x_max, and y_max specify a rectangle. For rooms, however, these coordinates are not the bounding box of the room, since a room can have an arbitrary shape rather than a rectangle; they merely denote an arbitrary region that falls inside the room. Please refer to the data loader code to see how to process these annotations.
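Here is a minimal sketch of how such an annotation file could be read. It assumes whitespace-separated fields, and the file path and the exact category strings (e.g., 'wall') are illustrative assumptions; the data loader code remains the authoritative reference.

```python
# Sketch of an annotation reader. Assumptions (not confirmed by this README):
# fields are whitespace-separated, and walls use the literal category 'wall'.

def load_annotation(path):
    """Return a list of (x_min, y_min, x_max, y_max, category) tuples."""
    elements = []
    with open(path) as annotation_file:
        for line in annotation_file:
            tokens = line.split()
            if len(tokens) < 5:
                continue  # skip blank or malformed rows
            x_min, y_min, x_max, y_max = [float(t) for t in tokens[:4]]
            category = tokens[4]  # wall, door, an icon type, or a room type
            # tokens[5:] hold the dump_1 / dump_2 fields; ignored here
            elements.append((x_min, y_min, x_max, y_max, category))
    return elements

# Example: collect the wall lines of a (hypothetical) annotation file.
walls = [e for e in load_annotation('data/example_annotation.txt') if e[4] == 'wall']
```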
The link to the 100,000+ vector-graphics representations generated by our algorithm is coming soon. Please contact me (chenliu@wustl.edu) if you want to use them right now.
To train the network from the pretrained pose estimation network, simply run
th main.lua -loadPoseEstimationModel "path to the downloaded pose estimation model"
To load our trained model and resume training, please run
th main.lua -loadModel "path to the downloaded pretrained model"
Here are some useful options for the main script; an example invocation combining several of them follows the list:
- -batchSize specifies the batch size
- -LR specifies the learning rate
- -nEpochs specifies the number of epochs
- -checkpointEpochInterval specifies the number of training epochs between two checkpoints (useful if you want to save fewer checkpoints rather than one for every epoch)
- -useCheckpoint specifies how training resumes:
  - -1: start from the beginning even if previously trained checkpoints are found
  - 0 (default): resume from checkpoints if found
  - n (n > 0): resume from the nth checkpoint
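For example, the following run combines several of these options, resuming from checkpoints if found and saving a checkpoint every 5 epochs (the numeric values here are illustrative, not recommended settings):

th main.lua -batchSize 8 -LR 0.0001 -nEpochs 50 -checkpointEpochInterval 5 -useCheckpoint 0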
To make a prediction on a floorplan image, run
th predict.lua -loadModel "model path" -floorplanFilename "path to the floorplan image" -outputFilename "output filename"
Note that the above script produces the vectorization result (saved as a ".txt" file), a rendered image (saved as a ".png" file), and a text file that can be used to generate 3D models (saved as a "_popup.txt" file).
To evaluate performance on the benchmark, run
th evaluate.lua -loadModel "model path" -resultPath "path to save results"
Automatic 3D model generation based on our vectorization results is implemented in both C++ (under the popup/ folder) and Python (under the rendering/ folder). (Both were finished in a hurry, so please let us know if you have any questions.)
For the C++ code, run the following:
cd popup/code/
cmake .
make
./popup_cli ../data/floorplan_1.txt
The data file (e.g., popup/data/floorplan_1.txt), which can be generated by predict.lua, has the following format:
width height
the number of walls
(Wall descriptions)
x_1, y_1, x_2, y_2, room type on the left, room type on the right
...
(Opening descriptions)
x_1, y_1, x_2, y_2, 'door', dummy, dummy
(Icon descriptions)
x_1, y_1, x_2, y_2, icon type, dummy, dummy
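The sketch below reads such a file under a few assumptions that this README does not confirm: the parenthesized headers above are placeholders rather than literal lines, fields within a record are comma-separated as shown, and openings and icons are distinguished by their fifth field. Treat it as illustrative, not as the shipped parser.

```python
def load_popup_file(path):
    """Parse (width, height), walls, doors, and icons from a popup data file."""
    with open(path) as popup_file:
        lines = [line.strip() for line in popup_file if line.strip()]
    width, height = [int(v) for v in lines[0].split()]
    num_walls = int(lines[1])

    def parse(line):
        return [token.strip() for token in line.split(',')]

    # Wall records: x_1, y_1, x_2, y_2, left room type, right room type
    walls = [parse(line) for line in lines[2:2 + num_walls]]
    # Remaining records are openings ('door') and icons.
    rest = [parse(line) for line in lines[2 + num_walls:]]
    doors = [r for r in rest if r[4].strip("'") == 'door']
    icons = [r for r in rest if r[4].strip("'") != 'door']
    return (width, height), walls, doors, icons
```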
The Python code is based on Panda3D. First enter the rendering/ folder, and then either run:
python viewer.py
to view a 3D model, or run:
python rendering.py
to render one view of the 3D model given a camera pose. Please check the code to see how to specify the model to view and how to render different views.
If you have any questions, please contact me at chenliu@wustl.edu.