Requirement for capturing photos of calibration board
DonovanZhu opened this issue · 4 comments
System information (version)
- Operating System / Platform => Ubuntu 20.04
- OpenCV => 4.5.5
- Ceres => 2.1.0
- Boost => 1.71.0.0
- C++ => 17
Vision system
- Number of cameras => 2
- Types of cameras => perspective (perspective, fisheye, hybrid)
- Multicamera configurations => non-overlapping (overlapping, non-overlapping, converging)
- Configuration file => .yml (i.e. *.yml)
- Image sequences => around 100 images per camera
- if relevant and possible, please share image sequences
Describe the issue / bug
I want to calibrate two D435 cameras using their infrared images. The two cameras are non-overlapping, and two ChArUco boards are used.
In my experiments, I found that the way images are captured has a large impact on the output. For example, when I moved the camera linearly in all directions, then rotated it around each axis, taking images throughout, the output was somewhat better than when I moved the camera randomly.
My question is: what is a reasonable rule for capturing images for camera calibration?
Thank you very much for using MC-Calib and for your question.
Interestingly, the calibration of a non-overlapping pair of D435 is a scenario we have also resolved during our experiments, so you should be able to reach very accurate calibration. How large is your baseline?
It is very true that the input images will have a very strong impact on the final results.
Here are a few notes that might be useful:
- First of all, you should keep in mind that for non-overlapping systems a few degenerate cases exist, so you should move your vision system in all possible directions. If your system stays in the same plane, or undergoes pure rotation around a single axis, the calibration will be wrong. So the diversity of viewpoints is very important.
- Another critical point is synchronization. Have you verified that your cameras are well synchronized? How do you synchronize your cameras?
- In our tests, we captured videos containing many more images (a few thousand) to ensure a good calibration. So quantity, as well as diversity, can be very important.
- The quality of the images is also very important; with the infrared cameras of the D435, we faced many problems with auto-exposure, leading to strong motion blur and wrong synchronization. Therefore, we forced these cameras to a fast shutter speed, leading to relatively dark images that were still sufficient for calibration. Also, if you consider the RGB cameras as well, note that they suffer from a strong rolling shutter, so I would recommend not including them in the non-overlapping calibration process.
May I also ask what sort of results you are obtaining? What is the reprojection error? Could you also share your images with me so that I can take a deeper look?
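For reference, forcing a fixed, short exposure on the D435 sensors might look like the sketch below with pyrealsense2. This is only an illustration under assumptions: the stream settings (left infrared, 1280x720, Y8, 30 fps) and the 2000 µs exposure are example values to adapt to your setup.

```python
def apply_manual_exposure(exposure_us=2000):
    """Disable auto-exposure on every sensor that supports it and fix a
    short exposure (in microseconds) to avoid motion blur. Requires a
    connected camera; call once before capturing the sequence."""
    import pyrealsense2 as rs  # assumed installed with the RealSense SDK

    pipeline = rs.pipeline()
    config = rs.config()
    # Left infrared stream of the D435 (index 1), 8-bit grayscale.
    config.enable_stream(rs.stream.infrared, 1, 1280, 720, rs.format.y8, 30)
    profile = pipeline.start(config)

    for sensor in profile.get_device().query_sensors():
        if sensor.supports(rs.option.enable_auto_exposure):
            sensor.set_option(rs.option.enable_auto_exposure, 0)  # manual mode
        if sensor.supports(rs.option.exposure):
            sensor.set_option(rs.option.exposure, exposure_us)  # short => dark but sharp
    return pipeline
```

A short exposure trades brightness for sharpness; as noted above, fairly dark images are usually still fine for board detection.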
Thank you so much for your reply! I mounted the two D435 cameras on a module like this:
The distance between the RGB cameras of the two D435s is about 80 mm. I want to merge the point clouds of the two D435s, so I need to know the exact transformation matrix between them. On the points you just mentioned:
- The degenerate cases might be the reason I got a bad output. I always moved the module along the same line or rotated it around a single axis. I will try to add more viewpoint diversity.
- Unfortunately, hardware synchronization is not used in my calibration process. Instead, I put the camera module on a static tripod and take a picture, then move the tripod and take another. I am not sure whether this works or not; I will try a hardware sync cable later.
- I only use about 70 images for calibration. Could this be another problem? I will try a few thousand photos once I have synchronization in place.
- I have to admit I did not consider the auto-exposure problem. I will try a fast shutter speed later.
Since I want to merge the point clouds of the two D435s, I need precise extrinsic parameters. Here is the reprojection error file:
reprojection_error_data.txt
The photos I captured are here. The square size of all boards is 100 mm.
https://drive.google.com/file/d/1DRe1H1khzznrXua5VnTT6oNymSdrBq4o/view?usp=sharing
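As a side note, if the error file lists one per-point residual, the overall RMS can be summarized with a few lines. This is a minimal stdlib-only sketch; the actual layout of MC-Calib's error file may differ.

```python
import math

def rms_reprojection_error(residuals):
    """Root-mean-square of per-point reprojection errors (in pixels)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

An RMS well below one pixel per camera is usually a good sign, but a low reprojection error alone does not guarantee correct extrinsics when the motion is degenerate.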
Thank you very much for the feedback! I will check your images a bit later.
If you use a tripod, then motion blur and synchronization are definitely not a problem.
However, if the elevation is always the same, I suspect this kind of degenerate linear or planar motion is causing the problem. Also, if you are able to capture more diverse viewpoints, 70 images might be enough!
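To sanity-check a capture session for this failure mode before recalibrating, one can test whether the per-frame camera positions span all three dimensions. Below is a minimal stdlib-only sketch; the function name, the pose source (positions extracted from the per-frame board-to-camera poses), and the tolerance are assumptions, and the threshold should be scaled to your units.

```python
def motion_is_degenerate(positions, tol=1e-6):
    """True if the camera positions (x, y, z) are nearly coplanar or
    collinear -- the degenerate motions that break non-overlapping
    calibration."""
    n = len(positions)
    mean = [sum(p[i] for p in positions) / n for i in range(3)]
    centered = [[p[i] - mean[i] for i in range(3)] for p in positions]
    # 3x3 covariance; a near-zero determinant means the positions span
    # fewer than three dimensions (a plane or a line).
    c = [[sum(v[i] * v[j] for v in centered) / n for j in range(3)]
         for i in range(3)]
    det = (c[0][0] * (c[1][1] * c[2][2] - c[1][2] * c[2][1])
         - c[0][1] * (c[1][0] * c[2][2] - c[1][2] * c[2][0])
         + c[0][2] * (c[1][0] * c[2][1] - c[1][1] * c[2][0]))
    return det < tol

# Positions all on the z=0 plane -> degenerate; spread in 3D -> fine.
print(motion_is_degenerate([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # True
print(motion_is_degenerate([(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]))  # False
```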
Please keep me posted on your progress.
Thanks a lot for your suggestion! I will update the comment once I have some progress.