Cartucho/vision_blender

Ground Truth Pose and Translation verification

kaali-billi opened this issue · 1 comment

I have edited your code to output object_pose as quaternions, where the pose is that of the object as seen from the camera.
My problem is with the translation: even after changing my verification algorithm (which uses Open3D), a slight translation error remains (< 5 m in each of x, y, z).
I am attaching the changes I made; please let me know if you can help me out.
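
To quantify that offset, I compare the centroids of the two Open3D clouds shown at the bottom of this post (a minimal sketch; `observed` and `gt` are built in the snippet after the image legend):

```python
import numpy as np

# `observed` / `gt` are the Open3D point clouds constructed at the end of
# this post; get_center() returns the centroid of a point cloud.
offset = np.asarray(observed.get_center()) - np.asarray(gt.get_center())
print("per-axis translation error:", offset)
```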

Quaternion pose generation:

```python
import bpy
import mathutils
import numpy as np

def quat_obj_poses():
    """Per-object rotation, as a quaternion, in the camera's reference frame."""
    cam = bpy.data.objects['Camera']
    n_chars = get_largest_object_name_length()
    n_object = len(bpy.data.objects)
    # Axis conversion: maps (x, y, z) -> (x, -z, y)
    conv_mat = mathutils.Matrix([[1, 0,  0, 0],
                                 [0, 0, -1, 0],
                                 [0, 1,  0, 0],
                                 [0, 0,  0, 1]])
    obj_poses_quat = np.zeros(n_object, dtype=[('name', 'U{}'.format(n_chars)),
                                               ('pose', np.float64, (4,))])
    for ind, obj in enumerate(bpy.data.objects):
        # Object pose relative to the camera, converted, then reduced to a quaternion
        pose = (conv_mat @ cam.matrix_world.inverted() @ obj.matrix_world).to_quaternion()
        obj_poses_quat[ind] = (obj.name, pose)
    return obj_poses_quat
```
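
For reference, this is how I call it and dump the result so I can inspect it outside Blender (the output path is just an example):

```python
# Run inside Blender's Python console once the scene is loaded.
import numpy as np

poses = quat_obj_poses()
print(poses['name'], poses['pose'])        # quaternions in (w, x, y, z) order
np.save('/tmp/obj_poses_quat.npy', poses)  # example path; adjust as needed
```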

Translation / location generation:

```python
def get_translation():
    """Per-object location in the camera's reference frame."""
    cam = bpy.data.objects['Camera']
    n_chars = get_largest_object_name_length()
    n_object = len(bpy.data.objects)
    conv_mat = mathutils.Matrix([[1, 0,  0, 0],
                                 [0, 0, -1, 0],
                                 [0, 1,  0, 0],
                                 [0, 0,  0, 1]])
    translation = np.zeros(n_object, dtype=[('name', 'U{}'.format(n_chars)),
                                            ('location', np.float64, (3,))])
    for ind, obj in enumerate(bpy.data.objects):
        tmp = conv_mat @ cam.matrix_world.inverted() @ obj.matrix_world
        loc = tmp.to_translation()
        # Swap the axes once more: (x, y, z) -> (x, z, -y)
        loc[1], loc[2] = loc[2], -loc[1]
        translation[ind] = (obj.name, loc)
    return translation
```
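
For the verification I rebuild a 4x4 homogeneous transform from the two outputs with a small numpy helper (`pose_to_matrix` is my own name, shown here as a sketch; it assumes a unit quaternion in mathutils' (w, x, y, z) order):

```python
import numpy as np

def pose_to_matrix(q, t):
    """4x4 homogeneous transform from a unit quaternion (w, x, y, z)
    and a translation vector (x, y, z)."""
    w, x, y, z = q
    T = np.eye(4)
    T[:3, :3] = [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]
    T[:3, 3] = t
    return T
```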


Let me know if this can be corrected.

[Attached image: Open3D comparison of the two point clouds]
Blue point cloud: observed cloud built from the depth map.
Green point cloud: ground-truth cloud generated using the extracted pose and translation.
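
For completeness, the comparison itself is roughly the following sketch; the file names and intrinsics are placeholders for my actual setup, and `pose_to_matrix` is the helper above:

```python
import numpy as np
import open3d as o3d

# Placeholder intrinsics; substitute the values used for the Blender render.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=640, height=480, fx=600.0, fy=600.0, cx=320.0, cy=240.0)

# Blue: point cloud back-projected from the rendered depth map.
depth = o3d.io.read_image("0001_depth.png")  # example file name
observed = o3d.geometry.PointCloud.create_from_depth_image(
    depth, intrinsic, depth_scale=1000.0)  # depth_scale assumes depth in mm
observed.paint_uniform_color([0.0, 0.0, 1.0])

# Green: object model placed with the extracted pose and translation
# (both arrays dumped from Blender with np.save, like the quaternions above).
q = np.load("/tmp/obj_poses_quat.npy")[0]['pose']
t = np.load("/tmp/obj_locations.npy")[0]['location']
gt = o3d.io.read_point_cloud("object.ply")  # example file name
gt.transform(pose_to_matrix(q, t))
gt.paint_uniform_color([0.0, 1.0, 0.0])

o3d.visualization.draw_geometries([observed, gt])
```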