Our work proposes a novel framework that addresses the computational limitations of training Dense Object Nets (DON) while still producing robust, dense visual object descriptors. DON's descriptors are known for their robustness to viewpoint and configuration changes, but training them requires image pairs with computationally expensive pixel-correspondence mapping. This cost limits the achievable descriptor dimensionality and robustness, and thereby restricts generalization across objects. To overcome this, we introduce a data generation procedure based on synthetic augmentation and a novel deep-learning architecture that produces denser visual descriptors at reduced computational cost. Notably, our framework eliminates the need for image-pair correspondence mapping, and we demonstrate its use in a robotic grasping pipeline. Experimental results show that our approach yields descriptors as robust as those produced by DON.
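The core idea can be sketched in a few lines: if the synthetic augmentation has a known pixel mapping, correspondences between the original and the augmented image come for free, so no expensive correspondence search over real image pairs is needed. The sketch below is a minimal illustration under that assumption, not the paper's implementation; the toy network, the horizontal-flip augmentation, and the generic DON-style pixelwise contrastive loss are all stand-ins chosen for brevity.

```python
# Minimal sketch (not the authors' code): train a dense-descriptor net from
# a synthetic augmentation whose pixel mapping is known, so correspondences
# are implicit and no image-pair correspondence mapping is required.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorNet(nn.Module):
    """Toy fully convolutional net: RGB image -> D-channel descriptor image
    of the same spatial size. A stand-in for a real backbone."""
    def __init__(self, descriptor_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, descriptor_dim, 1),
        )

    def forward(self, x):
        return self.net(x)

def augment_with_known_mapping(img):
    """Synthetic augmentation with an analytically known pixel mapping.
    Here: a horizontal flip, so pixel (u, v) maps to (W-1-u, v)."""
    return torch.flip(img, dims=[-1])

def sample_correspondences(h, w, n=256):
    """Sample n pixels in the original image and compute their (known)
    locations in the flipped image."""
    us = torch.randint(0, w, (n,))
    vs = torch.randint(0, h, (n,))
    return (us, vs), (w - 1 - us, vs)

def pixelwise_contrastive_loss(desc_a, desc_b, pix_a, pix_b, margin=0.5):
    """Generic DON-style loss: pull matching descriptors together, push
    randomly permuted (crude) non-matches apart with a hinge."""
    da = desc_a[:, :, pix_a[1], pix_a[0]]        # (B, D, N) descriptors in A
    db = desc_b[:, :, pix_b[1], pix_b[0]]        # (B, D, N) matches in B
    match = (da - db).pow(2).sum(dim=1).mean()   # matches: small distance
    perm = torch.randperm(da.shape[-1])          # shuffle to get non-matches
    neg_dist = (da - db[:, :, perm]).pow(2).sum(dim=1).sqrt()
    non_match = F.relu(margin - neg_dist).pow(2).mean()
    return match + non_match

# One illustrative training step on a random stand-in image.
model = DescriptorNet(descriptor_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

img = torch.rand(1, 3, 64, 64)
aug = augment_with_known_mapping(img)
pix_a, pix_b = sample_correspondences(64, 64, n=256)

loss = pixelwise_contrastive_loss(model(img), model(aug), pix_a, pix_b)
opt.zero_grad()
loss.backward()
opt.step()
print(f"loss: {loss.item():.4f}")
```

In practice the flip would be replaced by richer synthetic augmentations and the toy network by a full descriptor backbone; the point of the sketch is only that a known augmentation transform supplies the pixel correspondences that DON training otherwise has to compute.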
@inproceedings{navale2023training,
  title        = {Training Dense Object Nets: A Novel Approach},
  author       = {Navale, Kanishk and Gulde, Ralf and Tuscher, Marc and Riedel, Oliver},
  booktitle    = {2023 Fifth International Conference on Transdisciplinary AI (TransAI)},
  pages        = {212--217},
  year         = {2023},
  organization = {IEEE}
}