We propose a collection of robot-world/hand-eye calibration methods that establish the geometric relationship between a robot, its camera, and the environment. We examine the calibration problem from two alternative geometric interpretations, namely hand-eye and robot-world-hand-eye, as shown in the figure. The study analyzes the effects of formulating the objective as a pose-error versus a reprojection-error minimization problem.
Figure: Formulations relating the geometric transformations for calibration: (a) Hand-Eye Calibration; (b) Robot-World-Hand-Eye Calibration
The datasets are provided as part of the Centre for Immersive Visual Technologies (CIVIT) initiative to provide open-access data.
Please cite the following publication if you use the dataset and/or the code:
@article{ali2019methods,
title={Methods for Simultaneous Robot-World-Hand--Eye Calibration: A Comparative Study},
author={Ali, Ihtisham and Suominen, Olli and Gotchev, Atanas and Morales, Emilio Ruiz},
journal={Sensors},
volume={19},
number={12},
pages={2837},
year={2019},
doi={10.3390/s19122837},
publisher={Multidisciplinary Digital Publishing Institute}
}
The code depends on MATLAB and some of its toolboxes, such as the Computer Vision Toolbox.
Simply run the main.m file to reproduce the results provided in Table 3 of the publication. By default, the code runs the kuka_2 dataset. To use a different dataset, specify its name in the main.m file, e.g. DatasetName='CS_synthetic_3'. The relevant information (square size, ground truth if available, poses, etc.) is loaded automatically. To use your own dataset, follow the structure of the provided data and set the name of your dataset in the main file. When prompted for the square size, enter it in meters.
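The dataset selection described above can be sketched as the following edit at the top of main.m (a minimal illustration only; the DatasetName variable comes from this README, while the exact surrounding code in main.m may differ):

```matlab
% In main.m: select which dataset to load.
% 'kuka_2' is the default; any provided dataset name can be used instead.
DatasetName = 'CS_synthetic_3';

% Square size, ground truth (if available), and robot/camera poses for the
% chosen dataset are then loaded automatically by the script.
% When the script asks for the checkerboard square size, give it in meters,
% e.g. 0.025 for a 25 mm square.
```

For a custom dataset, place your data in a folder following the same layout as the provided datasets and set DatasetName to that folder's name.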
Ihtisham Ali
Tampere University, Finland