gopro/gpmf-parser

Accelerometer data value

zhangy76 opened this issue · 11 comments

May I ask why my accelerometer reading is larger than +9.81 m/s^2 on the Z axis and not zero on the other two axes?

Thanks,

By how much and which camera model?

Thank you so much for the quick response! It is a HERO7 Black. I put it on a table and get 10.4, -0.34, -0.34 for Z, X, Y, respectively. The table should be perpendicular to the direction of gravity and the camera is standing upright.

Also, I did a simple experiment to check the coordinate frame of the acceleration data. Basically, I rotated the camera with one axis fixed, and the acceleration on the fixed axis also changed. I feel like the data coordinate system does not align well with the camera coordinate system. Is that correct? Or is my camera broken 😂

It is not terribly far out, so I expect the HERO7 data is just not calibrated. On my HERO9 I get 9.815, -0.07, 0.119, which is much closer, so I expect that data is calibrated. The small X-Y errors are likely because my desk is not perfectly level. On my HERO7 I get 10.74, 0.01, 0.21. IMU calibration has improved across camera models; I expect the HERO7 predates any in-factory calibration. My HERO8 reads 9.84, 0.05, 0.02, so it also seems calibrated. So your camera is not broken, you are just looking at raw data.
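A quick way to see this (an illustrative Python sketch, not part of this repo): a stationary accelerometer should report a vector whose magnitude is ~9.81 m/s^2 regardless of orientation, so checking the magnitude of the readings above separates calibrated from uncalibrated units.

```python
import math

# Illustrative only: magnitude of a stationary reading should be ~9.81 m/s^2.
def magnitude(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

# HERO7 sample from this thread, (X, Y, Z) = (-0.34, -0.34, 10.4):
print(magnitude(-0.34, -0.34, 10.4))   # ~10.41 m/s^2, about 6% high -> uncalibrated
# HERO9 sample, (X, Y, Z) = (-0.07, 0.119, 9.815):
print(magnitude(-0.07, 0.119, 9.815))  # ~9.82 m/s^2 -> consistent with calibration
```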

I expect the latest models calibrate the accelerometer as it is used for more features (auto horizon leveling). On the HERO7 and earlier, only the gyro was used for on-camera features like stabilization.

Thanks for the detailed explanation. In fact, I would like to get position from the acceleration. Though double integration accumulates error over time, do you think it's possible to keep the error within 10 mm for a 1 min video? Also, is there any other way to improve the estimate?
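A back-of-the-envelope sketch (illustrative Python, numbers are assumptions) of why 10 mm over a minute is very hard: under double integration, even a tiny constant residual bias produces a position error that grows quadratically with time.

```python
# Illustrative only: position error from a constant accelerometer bias b
# after double integration, x(t) = 0.5 * b * t^2.
def drift_after(bias_mps2, seconds):
    return 0.5 * bias_mps2 * seconds ** 2

# Even a small residual bias of 0.01 m/s^2 over a 60 s clip:
print(drift_after(0.01, 60))  # about 18 m, vastly beyond a 10 mm target
```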

Also, no matter what the coordinate system is, the stationary acceleration magnitude should not exceed 9.8 m/s^2, right? The HERO7 reads something like 10.8.

Yufei

You need to do your own calibration on the HERO7, for both orientation and magnitude. Position from acceleration alone (even with calibration) would not be precise; you need to fuse it with other sensor data. Vision and/or GPS will be needed.

May I ask how to exploit the vision modality?

Also, is there any reference for the calibration process?

All outside the scope of this repo. Calibration (stationary camera, average the readings, then compute and apply rotation matrices, most likely) and visual odometry are covered in depth elsewhere (lots of OpenCV material). This repo is only about retrieving the data stored on the camera. I happen to have experimented with accelerometer-to-position, and found the problems require image sensor fusion, but that is where my knowledge ends.

OK. Learned a lot. Thanks!