graspnet/anygrasp_sdk

translation data seems to be wrong

LunaceC opened this issue · 3 comments

Hi! While the model generates satisfying grasps in the Open3D visualization, the translation term in the grasp data I get always seems to be wrong.

For instance, the model just generated [0.03345555 0.16258606 0.88210291] as the translation term of the highest-score grasp. However, as I measured (with a ruler), the grasp point should actually be at about [-0.255, -0.074, 0.685], which is definitely not measurement error lol

However, the rotation matrix it generated is actually quite accurate, which suggests that my image input is working.

[Screenshots attached: 2023-11-23 21-03-17, 2023-11-23 21-02-55]

Meanwhile, in Open3D the grasp seems to be adequately located, which really confuses me.

Do you have any idea on this? Any help would be appreciated!

Okay, here's more weird stuff after further testing:

After trying different test cases, I found that the grasp data generated for 640×480 images was fully correct, while all 1280×720 images failed on the translation part. The rotation matrices are good in all test cases.

This only added to my confusion, but I hope it helps with locating the problem XD

Did you change the intrinsic matrix accordingly when you switched the image size?
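For anyone hitting the same symptom: if the image resolution changes by a pure rescale, the intrinsics have to be rescaled with it. Here's a minimal sketch; the names `fx, fy, cx, cy` and the numbers are illustrative, not the anygrasp_sdk API — the point is only that whatever intrinsics you pass must match the image you feed in.

```python
# Minimal sketch: rescaling pinhole intrinsics for a new image resolution.
# Variable names and values are illustrative, not the anygrasp_sdk API.

def scale_intrinsics(fx, fy, cx, cy, old_size, new_size):
    """Rescale intrinsics from old_size to new_size, both (width, height)."""
    sx = new_size[0] / old_size[0]  # horizontal scale factor
    sy = new_size[1] / old_size[1]  # vertical scale factor
    return fx * sx, fy * sy, cx * sx, cy * sy

# Example: parameters calibrated at 640x480, rescaled for 1280x720 input.
fx, fy, cx, cy = scale_intrinsics(600.0, 600.0, 320.0, 240.0,
                                  (640, 480), (1280, 720))
```

Note that 640×480 and 1280×720 have different aspect ratios, so on most cameras they are separate stream modes with their own factory calibration; in that case, use each mode's own intrinsics directly rather than rescaling.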

You're right!!! I'd put the intrinsic adjustment in the hand-eye calibration step, so in my main loop it wasn't applied when I changed the image size. Now the project works perfectly.

Thanks a ton for the fast response!!
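For future readers, the symptom pattern (translation wrong, rotation fine) is consistent with how a point cloud is built from depth. In a standard pinhole back-projection, every 3D point depends directly on `fx, fy, cx, cy`, so stale intrinsics shift and rescale the whole cloud, corrupting grasp translations, while the relative geometry that determines orientation is far less affected. A minimal sketch of the back-projection, not anygrasp_sdk's internal code:

```python
import numpy as np

# Minimal sketch of pinhole back-projection (not anygrasp_sdk's internals).
# A wrong principal point (cx, cy) translates the whole cloud and wrong
# focal lengths (fx, fy) rescale it, which breaks grasp translations while
# leaving rotations plausible, as observed above.

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a metric depth image into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# With 640x480 intrinsics applied to a 1280x720 image, cx and cy are off
# by hundreds of pixels, so every point (and grasp translation) is displaced.
```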