microsoft/Azure-Kinect-Sensor-SDK

How can I obtain the transformation matrix from the color camera to the depth camera?

TitansWhale opened this issue · 1 comment

I have 20 points (x, y) in the color image and want to map them into the depth image. How can I obtain this transformation matrix?
I have looked at k4a_calibration_color_2d_to_depth_2d, but when a selected point falls inside a hole in the depth map it returns a nearby non-hole point instead. I would like to obtain the correct depth coordinates even for points that land in holes. What should I do?
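A note on why the extrinsic matrix alone is not enough: the color-to-depth extrinsics transform 3-D points, so a 2-D color pixel must first be unprojected using its depth value, transformed, and then reprojected into the depth camera. A minimal sketch of that pipeline in NumPy, using made-up pinhole intrinsics and extrinsics for illustration (the real values come from the SDK's `k4a_calibration_t`):

```python
import numpy as np

# Hypothetical intrinsics/extrinsics for illustration only -- real values
# come from k4a_calibration_t (color/depth camera intrinsics, extrinsics).
K_color = np.array([[600.0,   0.0, 320.0],
                    [  0.0, 600.0, 240.0],
                    [  0.0,   0.0,   1.0]])
K_depth = np.array([[504.0,   0.0, 256.0],
                    [  0.0, 504.0, 256.0],
                    [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # rotation, color -> depth
t = np.array([32.0, 0.0, 0.0])   # translation, millimetres

def color_2d_to_depth_2d(u, v, depth_mm):
    """Map a color pixel (u, v) with known depth (in mm) into the depth image."""
    # 1. Unproject the color pixel to a 3-D point in the color camera frame.
    ray = np.linalg.inv(K_color) @ np.array([u, v, 1.0])
    p_color = ray * depth_mm
    # 2. Apply the color -> depth extrinsic transform.
    p_depth = R @ p_color + t
    # 3. Project into the depth camera.
    uvw = K_depth @ p_depth
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

This is the geometry that `k4a_calibration_2d_to_2d` implements (with proper distortion handling); note that it always needs a depth value for the source pixel, which is why a hole in the depth map is a problem in the first place.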

```python
import numpy as np  # the snippet assumes numpy and the _k4a ctypes bindings

def get_extrinsics_rgb_to_depth(self):
    # Color -> depth extrinsics from k4a_calibration_t: a row-major 3x3
    # rotation and a 3-element translation (in millimetres).
    t = self.calibration_handle.extrinsics[_k4a.K4A_CALIBRATION_TYPE_COLOR][_k4a.K4A_CALIBRATION_TYPE_DEPTH]
    ret = np.eye(4)
    ret[:3, :3] = np.array([t.rotation[i] for i in range(9)]).reshape(3, 3)
    ret[:3, 3] = [t.translation[i] for i in range(3)]
    return ret
```
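The 4×4 matrix built above operates on homogeneous 3-D points (in millimetres), not on 2-D pixel coordinates, so multiplying it with `[x, y, 0, 1]` does not produce a depth-image pixel. A small sketch of the intended usage, with an example translation standing in for the matrix the method returns:

```python
import numpy as np

T = np.eye(4)
T[:3, 3] = [32.0, 0.0, 0.0]      # example color -> depth translation (mm)

p_color = np.array([100.0, 50.0, 1000.0, 1.0])  # homogeneous 3-D point
p_depth = T @ p_color                            # column-vector convention
# p_depth[:3] is the same point expressed in the depth camera frame:
# [132.0, 50.0, 1000.0]
```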

```python
def colorxy_to_depthxy(self, xy):
    ca = self.get_calibrateion()
    ca.get_matrix()
    k4a_depth_img = self.get_depth_image_object()

    # The 4x4 extrinsic matrix from get_extrinsics_rgb_to_depth() transforms
    # 3-D points, so it cannot map a bare 2-D pixel on its own; the pixel's
    # depth must be known first.
    source_point2d = _k4a.k4a_float2_t()
    source_point2d.v[0] = xy[0]
    source_point2d.v[1] = xy[1]

    # Wraps k4a_calibration_color_2d_to_depth_2d, which searches the depth
    # image for valid depth along the color pixel's epipolar line.
    k4a_float2 = ca.convert_color_2d_to_depth_2d(source_point2d, depth_image=k4a_depth_img)
    return [int(k4a_float2.v[0]), int(k4a_float2.v[1])]
```
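One way to get a usable coordinate even when the color pixel projects into a depth hole is to fill the holes in the depth image first and then run the 2-D-to-2-D conversion on the filled image; alternatively, the C API's `k4a_calibration_2d_to_2d` accepts an explicit source depth value, so any estimated depth can be supplied directly. A minimal sketch of a hole fill, assuming invalid pixels are encoded as 0 as in the SDK's depth maps (real pipelines may prefer proper inpainting or a nearest-valid-pixel fill):

```python
import numpy as np

def fill_depth_holes(depth, max_iters=10):
    """Iteratively fill zero-valued (invalid) pixels with the mean of
    their valid 4-neighbours."""
    depth = depth.astype(np.float64).copy()
    for _ in range(max_iters):
        holes = depth == 0
        if not holes.any():
            break
        padded = np.pad(depth, 1)            # zero padding at the borders
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = (neigh > 0).sum(axis=0)      # count of valid neighbours
        sums = neigh.sum(axis=0)
        fillable = holes & (valid > 0)
        depth[fillable] = sums[fillable] / valid[fillable]
    return depth
```

The filled values are interpolated, not measured, so the mapped depth coordinates for former hole pixels are approximations; whether that is acceptable depends on the application.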