This paper proposes a deep neural architecture, PlaneRCNN, that detects an arbitrary number of planes and reconstructs piecewise planar surfaces from a single RGB image. The code is implemented in PyTorch.
Clone the repository:
git clone https://github.com/NVlabs/planercnn.git
Please use Python 3. Create an Anaconda environment and install the dependencies:
conda create --name planercnn
conda activate planercnn
conda install -y pytorch=0.4.1
conda install pip
pip install -r requirements.txt
Alternatively, you can use a Python virtual environment to manage the dependencies:
pip install virtualenv
python -m virtualenv planercnn
source planercnn/bin/activate
pip install -r requirements.txt
Note: If you are working in Colab, use CUDA 8.0, torch 0.4.0, and gcc 5+ for building, then upgrade to torch 0.4.1 for evaluation/training.
Please note that the nms and roialign modules of the Mask R-CNN backbone do not support CUDA 10.0 or gcc versions higher than 7. If you have trouble compiling these two modules, try downgrading PyTorch to 0.4.0 before compilation and upgrading back to 0.4.1 after compilation. You can find more information in their original repository.
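As a sketch of that downgrade/upgrade sequence (the exact conda commands and the build step are assumptions based on the note above, not commands from the repo):

```bash
conda install -y pytorch=0.4.0   # downgrade before compiling nms/roialign
# ... compile the nms and roialign modules here ...
conda install -y pytorch=0.4.1   # upgrade back for training/evaluation
```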
Models are saved under checkpoint/. You can download our trained model from here and put it under checkpoint/ if you want to fine-tune it or run inference.
Backup gdrive link - here
In this project, plane parameters are of absolute scale (in meters). Each plane has three parameters, which equal plane_normal * plane_offset. Suppose plane_normal is (a, b, c) and plane_offset is d; then every point (X, Y, Z) on the plane satisfies aX + bY + cZ = d, and the plane parameters are (a, b, c)*d. Since the plane normal is a unit vector, we can recover plane_normal and plane_offset from their product.
To run inference on the example images:

python evaluate_original.py --methods=f --suffix=warping_refine --dataset=inference --customDataFolder=example_images
Results are saved under "test/inference/". Besides visualizations, plane parameters (#planes x 3) are saved in "*_plane_parameters_0.npy" and plane masks (#planes x 480 x 640) are saved in "*_plane_masks_0.npy".
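As a minimal sketch of recovering plane_normal and plane_offset from a saved output (the file name is hypothetical, matching the *_plane_parameters_0.npy pattern above; the 1e-4 epsilon is an arbitrary guard):

```python
import numpy as np

# (#planes, 3); each row is plane_normal * plane_offset
plane_parameters = np.load("test/inference/0_plane_parameters_0.npy")

# plane_offset d is the norm of each row; dividing by it recovers the unit normal
offsets = np.linalg.norm(plane_parameters, axis=-1, keepdims=True)
normals = plane_parameters / np.maximum(offsets, 1e-4)
```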
To run the inference code on custom data, please put your images (.png or .jpg files) and camera intrinsics under a folder ($YOUR_IMAGE_FOLDER). The camera parameters should be put in a .txt file with 6 values (fx, fy, cx, cy, image_width, image_height) separated by spaces. If the camera intrinsics are the same for all images, put these parameters in camera.txt. Otherwise, add a separate intrinsics file for each image and give it the same name as the image (changing the file extension to .txt). Then run the inference command below.
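A camera.txt for a 640x480 image might look like the line below (the focal lengths and principal point are made-up values; use your camera's actual intrinsics), and the command is presumably the same inference call as above, pointed at your folder:

```
587.0 587.0 320.0 240.0 640 480
```

python evaluate_original.py --methods=f --suffix=warping_refine --dataset=inference --customDataFolder=$YOUR_IMAGE_FOLDER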
For real-time inference, install the IP Webcam app on your mobile phone, start the server, and set the IP address at line 698 (in evaluate_realtime.py). Then run:

python evaluate_realtime.py --methods=f --suffix=warping_refine --dataset=inference
Please first download the ScanNet dataset (v2), unzip it to "$ROOT_FOLDER/scans/", and extract image frames from the .sens file using the official reader.
Then download our plane annotation from here, and merge the "scans/" folder with "$ROOT_FOLDER/scans/". (If you prefer other locations, please change the paths in datasets/scannet_scene.py.)
After the above steps, ground truth plane annotations are stored under "$ROOT_FOLDER/scans/scene*/annotation/". Among the annotations, planes.npy stores the plane parameters, which are represented in the global frame. Plane segmentation for each image view is stored under segmentation/.
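As a sketch of reading those annotations (the scene id and the per-view segmentation file name below are guesses at the layout, not verified names):

```python
import cv2
import numpy as np

root = "/path/to/ROOT_FOLDER"                      # your $ROOT_FOLDER
scene = root + "/scans/scene0000_00/annotation"    # hypothetical scene id
planes = np.load(scene + "/planes.npy")            # (num_planes, 3), global frame
# one view's plane segmentation; the file name is a guess
segmentation = cv2.imread(scene + "/segmentation/0.png", -1)
```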
To generate such training data on your own, please refer to data_prep/parse.py. Please refer to the README under data_prep/ for compilation.
Besides scene-specific annotation under each scene folder, please download global metadata from here, and unzip it to "$ROOT_FOLDER". Metadata includes the normal anchors (anchor_planes_N.npy) and invalid image indices caused by tracking issues (invalid_indices_*.txt).
Backup gdrive links:
- plane annotations - scannet_5_scenes.zip - here
- global metadata - metadata.zip - here
Please refer to tree.md for the expected folder structure.
To train the network, run:

python train_planercnn.py --restore=1 --suffix=warping_refine --dataFolder=data_prep/Data/
Options (see the example invocation after this list):
--restore:
- 0: training from scratch (not tested)
- 1 (default): resume training from saved checkpoint
- 2: training from pre-trained mask-rcnn model
--suffix (the below arguments can be concatenated):
- '': training the basic version
- 'warping': with the warping loss
- 'refine': with the refinement network
- 'refine_only': train only the refinement network
- 'warping_refine_after': add the warping loss after the refinement network instead of appending both independently
--anchorType:
- 'normal' (default): regress normal using 7 anchors
- 'normal[k]' (e.g., normal5): regress normal using k anchors, normal0 will regress normal directly without anchors
- 'joint': regress final plane parameters directly instead of predicting normals and depthmap separately
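For example, an illustrative combination of the flags above (not a recommended configuration):

python train_planercnn.py --restore=2 --suffix=warping_refine --anchorType=normal5 --dataFolder=data_prep/Data/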
Temporary results are written under test/ for debugging purposes.
Please refer to tree_custom_data.md.
Please refer to the tree.md file and prepare the dataset accordingly.
To train on custom data, you need a list of planes, where each plane is represented by three parameters (as explained above) and a 2D binary mask. In our implementation, we use one 2D segmentation map where pixels with value i belong to the i-th plane in the list. The easiest way is to replace the ScanNetScene class with something that interacts with your custom data (see the sketch below). Note that plane_info, which stores some semantic information and the global plane index in the scene, is not used in this project. The code is misleading, as global plane indices are read from plane_info here, but they are used only for debugging purposes.
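As a minimal sketch of such a replacement (the class name, file layout, and return format here are assumptions, not the repo's actual ScanNetScene interface):

```python
import cv2
import numpy as np

class CustomScene:
    """Hypothetical stand-in for ScanNetScene, serving one scene of custom data."""

    def __init__(self, image_paths, plane_paths, segmentation_paths):
        # parallel lists: one image, one planes.npy, one segmentation map per frame
        self.image_paths = image_paths
        self.plane_paths = plane_paths
        self.segmentation_paths = segmentation_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, index):
        image = cv2.imread(self.image_paths[index])
        planes = np.load(self.plane_paths[index])               # (num_planes, 3)
        segmentation = np.load(self.segmentation_paths[index])  # (H, W), pixel i => plane i
        # derive the per-plane binary masks from the single segmentation map
        masks = np.stack([segmentation == i for i in range(len(planes))])
        return image, planes, segmentation, masks
```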
Status legend:
- Pending - I have not worked on that part yet
- Working - I am currently working on that part
- Completed/Verified - it is completely working and verified
To train on custom data, run:

python train_planercnn_custom.py --restore=1 --suffix=warping_refine --dataFolder=data_prep/Data/
To evaluate the performance against existing methods, please run:
python evaluate.py --methods=f --suffix=warping_refine
Options (see the example invocation after this list):
--methods:
- f: evaluate PlaneRCNN (use --suffix and --anchorType to specify configuration as explained above)
- p: evaluate PlaneNet
- e: evaluate PlaneRecover
- t: evaluate MWS (--suffix=gt for MWS-G)
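For example, to evaluate MWS against its ground-truth variant (an illustrative invocation assembled from the options above):

python evaluate.py --methods=t --suffix=gt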
A. ScanNet training from checkpoint (given in repo) for other ScanNet images - Completed
It was observed that we need to cap the depth error at 0.3 (within limits) so that further training takes place without the loss going to "nan" and breaking training (see the sketch after these notes).
Was able to train for 17 epochs from the pretrained checkpoint.
Was able to run inference on the images after training.
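A sketch of the kind of guard meant by the depth-error note above (the function, variable names, and skip logic are ours, not the repo's actual training loop):

```python
import torch

MAX_DEPTH_ERROR = 0.3  # empirically, keeping depth error within 0.3 avoided nan losses

def safe_step(loss, depth_error, optimizer):
    # skip the update when the depth error is out of bounds or the loss is already nan
    if depth_error > MAX_DEPTH_ERROR or torch.isnan(loss):
        return False
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return True
```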
Getting annotations from parse.py - Verified/Completed:
1. Go to PlaneNet and set up a conda env with Python 2.7, tensorflow==1.13.0, and OpenCV; do everything in data_preparation/readme.md (install OpenMesh from https://github.com/TheWebMonks/meshmonk/blob/master/docs/ubuntu.md#installing-openmesh if you need it).
2. While running parse.py, change line 696 to: cmd = './Renderer/Renderer --scene_id=' + scene_id + ' --root_folder=' + ROOT_FOLDER
3. You can then get segmentation, plane_info.npy, and planes.npy.
B. ScanNet training from scratch - (Working). It was observed that when training from scratch the depth error should be within 0.3 to keep the losses minimal; however, with a bad initialization I have sometimes observed heavy losses. Mostly, if the depth error is controlled, training proceeds without failing. However, I trained for only a few epochs (4 or 5), so I was not able to run inference on the images because planeXYZ was not generated for those few epochs. (One reason is that the ScanNet dataset is large, and downloading and structuring it even for 5 scenes takes a long time. Also, because of depth error, some images are skipped.)
C. Proper method of annotating and constructing a custom dataset - (Working). It was observed that plane_info, segmentation, and depth images are important during training, so we need methods to properly annotate and construct this information in accordance with the parameters PlaneRCNN expects.
Method to generate clean depth images similar to the ScanNet or Redwood datasets - (Pending).
It seems that we need a stereo camera to generate such a perfect point-cloud depth map, so I took data from the Redwood indoor reconstruction dataset. I was able to generate custom segmentations without a point cloud with this repository - https://github.com/chaowang15/RGBDPlaneDetection; plane_info is given by the repo.
D. Custom images training from checkpoint - (Pending)
E. Custom images training from scratch - (Working)
- Converting RGB-D to a point cloud - install Open3D (http://www.open3d.org/docs/latest/tutorial/Basic/rgbd_image.html) - it is a really beautiful tool; 2D is converted to 3D. Wow! (see the sketch below)
- Dataset - http://redwood-data.org/indoor/dataset.html
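A minimal sketch following the Open3D RGBD tutorial linked above (the file names are hypothetical, and the default PrimeSense intrinsics are an assumption; replace them with your camera's parameters):

```python
import open3d as o3d

# load a color/depth pair (hypothetical file names)
color = o3d.io.read_image("color.jpg")
depth = o3d.io.read_image("depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth)

# default PrimeSense intrinsics as a placeholder
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

# back-project the RGB-D image into a 3D point cloud and show it
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])
```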
- Change the dataloader for the custom dataset - not completed - 27/11/2020 - working for from-scratch training.
- Getting a high loss as I have removed plane_info and anchors; need to replace them properly and reduce the loss - 29/11/2020
- Need to fetch separate planes for each image - 28/11/2020
Copyright (c) 2018 NVIDIA Corp. All Rights Reserved. This work is licensed under the Creative Commons Attribution NonCommercial ShareAlike 4.0 License.
If you have any questions, please contact Ajith (inocajith21.5@gmail.com).
- The nms/roialign from the Mask R-CNN implementation from pytorch-mask-rcnn, which is licensed under MIT License
- The School of AI(https://theschoolof.ai/) and EVA5 students