- Paper
- Project page
- Another good implementation of this project, with real demos, can be found here.
If you find this work useful, please consider citing:

```bibtex
@inproceedings{cai2022ove6d,
  title={OVE6D: Object Viewpoint Encoding for Depth-based 6D Object Pose Estimation},
  author={Cai, Dingding and Heikkil{\"a}, Janne and Rahtu, Esa},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6803--6813},
  year={2022}
}
```
Please start by installing Miniconda3 with Python 3.8 or above.
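For reference, a minimal environment setup might look like the following; the environment name and package list are illustrative assumptions, not the authors' pinned requirements:

```bash
# Sketch only: the environment name and packages are assumptions,
# not the repository's exact dependency list.
conda create -n ove6d python=3.8
conda activate ove6d
pip install torch torchvision numpy opencv-python
```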
This project requires the evaluation code from bop_toolkit and sixd_toolkit.
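Both toolkits are available on GitHub; for example (clone destinations are up to you):

```bash
# Clone the evaluation toolkits from their official repositories.
git clone https://github.com/thodan/bop_toolkit.git
git clone https://github.com/thodan/sixd_toolkit.git
```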
Our evaluation is conducted on three datasets, all downloaded from the BOP website. All three datasets are stored in the same directory, e.g. Dataspace/lm, Dataspace/lmo, Dataspace/tless.
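The expected layout would then look roughly like this (the Dataspace root name follows the example paths above; the per-dataset contents follow the standard BOP format):

```
Dataspace/
├── lm/      # LineMOD
├── lmo/     # Occluded LineMOD
└── tless/   # T-LESS
```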
Evaluation on the LineMOD and Occluded LineMOD datasets with the instance segmentation (Mask R-CNN) network, i.e. the entire pipeline (instance segmentation + pose estimation):

```bash
python LM_RCNN_OVE6D_pipeline.py
```

for LineMOD, and

```bash
python LMO_RCNN_OVE6D_pipeline.py
```

for Occluded LineMOD.
Evaluation on the T-LESS dataset with the provided object segmentation masks (downloaded from Multi-Path Encoder):

```bash
python TLESS_eval_sixd17.py
```
To train OVE6D, the ShapeNet dataset is required. You first need to pre-process ShapeNet with the provided script training/preprocess_shapenet.py; Blender is required for this step. For more details, refer to LatentFusion.
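A minimal invocation sketch, assuming Blender is already installed; any script arguments are omitted here because they are not documented above, so check the script itself:

```bash
# Sketch only: Blender must be installed first (see LatentFusion).
# The script may take arguments (e.g. the ShapeNet root); consult its source.
python training/preprocess_shapenet.py
```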
Our pre-trained OVE6D weights can be found here. Please download them and save them to the directory checkpoints/.
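A minimal loading sketch; the checkpoint file name below is an assumption, so use the actual file name from the download:

```python
# Sketch only: the checkpoint filename is an assumption, not the
# repository's confirmed file name.
import torch

ckpt = torch.load('checkpoints/OVE6D_pose_model.pth', map_location='cpu')
```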
- For T-LESS, we use the segmentation masks provided by Multi-Path Encoder.
- For LineMOD and Occluded LineMOD, we fine-tuned a Mask R-CNN initialized with the weights from Detectron2. The training data can be downloaded from BOP.
- The code is partially based on LatentFusion.
- The evaluation code is based on bop_toolkit and sixd_toolkit.