airo-ugent/airo-mono

Nice to haves for MultiprocessRGBRerunLogger

m-decoster opened this issue · 4 comments

Does the below fit within the camera toolkit or is this considered feature creep?

Describe the feature you'd like

There are a couple of features that would be useful to have in MultiprocessRGBRerunLogger, and perhaps also in the RGB-D version.

  1. Allow a configurable entity path to which images are logged. This is useful when camera parameters (rr.Pinhole, rr.Transform3D) are also logged, so that the camera output can be shown in the 3D view (see the sketch after this list).
  2. Add an option to also log camera intrinsics and/or extrinsics in the process, so that these are updated in Rerun when the camera moves.
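For concreteness, a minimal sketch of what feature 1 could enable. The `entity_path` constructor argument does not exist yet and is only an assumption here; the entity paths, resolution, and intrinsics values are illustrative:

```python
import numpy as np
import rerun as rr

rr.init("wrist_camera_demo", spawn=True)

# Hypothetical: an `entity_path` argument on the logger would let its images land
# under an entity of our choosing instead of a fixed path.
# logger = MultiprocessRGBRerunLogger(..., entity_path="world/camera/rgb")

# The main process can then log camera parameters on the same entity tree, so the
# images from the logger process appear inside the pinhole frustum in the 3D view.
intrinsics = np.array(
    [[1000.0, 0.0, 960.0],
     [0.0, 1000.0, 540.0],
     [0.0, 0.0, 1.0]]
)
rr.log("world/camera", rr.Transform3D(translation=[0.0, 0.0, 1.0]))
rr.log(
    "world/camera/rgb",
    rr.Pinhole(image_from_camera=intrinsics, resolution=[1920, 1080]),
)
```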

Use cases

  1. Anyone using the multiprocess rerun loggers
  2. Anyone using wrist-mounted cameras

@Victorlouisdg any thoughts? (since you wrote the multiprocess code)

  1. Configurable rerun entity paths: good idea, gives you more freedom to log additional stuff, feel free to add
  2. Logging intrinsics: would be nice and should be easy to add as MultiprocessRGBReceiver already reads the intrinsics matrix from shared memory.
  3. Extrinsics: I believe it makes more sense to log extrinsics to rerun directly from the main process. The main reasons we do the image logging in a separate process are computational efficiency and a non-blocking video feed, e.g. while the robot does a 5-second-long move_to_tcp_pose(). For extrinsics the situation is different: we can't (easily) retrieve the extrinsics/TCP pose while the robot is moving, and it's not much data, so the performance impact of logging it in the main process is much smaller.
  4. Point clouds: this is extra, but it would make sense to have an option in the MultiprocessRGBDRerunLogger to log point clouds, as this is very expensive to do in the main process. Point clouds are not passed to MultiprocessRGBDReceiver over shared memory yet, because they are quite a lot of additional data, but we could consider using open3d's create_from_rgbd_image() to reconstruct the point cloud from the already available RGB image and depth map.

@Victorlouisdg thanks for your input.

For extrinsics the situation is different: we can't (easily) retrieve the extrinsics/TCP pose while the robot is moving, and it's not much data, so the performance impact of logging it in the main process is much smaller.

That is a good point. Since we would be logging the camera pose from the main process anyway (i.e., not updating the current MultiprocessRGB[D]RerunLogger), I think it also makes more sense to leave intrinsics logging in the main process, especially since that value typically remains constant.
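A minimal sketch of what that main-process logging could look like, assuming the 3x3 intrinsics matrix and the camera's world-frame pose (e.g. robot_tcp_pose @ tcp_to_camera for a wrist-mounted camera) are available there. The entity paths and the static=True flag (Rerun >= 0.13) are illustrative choices, not existing airo-mono API:

```python
import numpy as np
import rerun as rr

# Placeholder values; in practice these would come from the camera and the robot.
intrinsics = np.array(
    [[1000.0, 0.0, 960.0],
     [0.0, 1000.0, 540.0],
     [0.0, 0.0, 1.0]]
)
camera_pose_in_world = np.eye(4)  # e.g. robot_tcp_pose @ tcp_to_camera

# Intrinsics typically remain constant, so logging them once (statically) suffices.
rr.log(
    "world/camera/rgb",
    rr.Pinhole(image_from_camera=intrinsics, resolution=[1920, 1080]),
    static=True,
)

# Extrinsics are cheap to log from the main loop whenever the pose is known,
# e.g. before and after each robot motion.
rr.log(
    "world/camera",
    rr.Transform3D(
        translation=camera_pose_in_world[:3, 3],
        mat3x3=camera_pose_in_world[:3, :3],
    ),
)
```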

we could consider using open3d's create_from_rgbd_image() to reconstruct the point cloud from the already available RGB image and depth map

I will have a look at how this could be implemented efficiently.
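A rough sketch of that reconstruction, assuming the depth map is a float32 array in meters and the RGB image is uint8; the helper name, entity path, and Open3D parameters are illustrative, not a definitive implementation:

```python
import numpy as np
import open3d as o3d
import rerun as rr


def log_point_cloud(rgb: np.ndarray, depth: np.ndarray, intrinsics: np.ndarray) -> None:
    """Hypothetical helper: reconstruct a point cloud from an RGB image (uint8, HxWx3)
    and a depth map (float32, meters) and log it to Rerun."""
    height, width = depth.shape
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(np.ascontiguousarray(rgb)),
        o3d.geometry.Image(np.ascontiguousarray(depth)),
        depth_scale=1.0,  # depth is already in meters
        convert_rgb_to_intensity=False,
    )
    pinhole = o3d.camera.PinholeCameraIntrinsic(
        width,
        height,
        intrinsics[0, 0],  # fx
        intrinsics[1, 1],  # fy
        intrinsics[0, 2],  # cx
        intrinsics[1, 2],  # cy
    )
    pointcloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole)
    rr.log(
        "world/camera/point_cloud",
        rr.Points3D(
            positions=np.asarray(pointcloud.points),
            colors=np.asarray(pointcloud.colors),  # floats in [0, 1]
        ),
    )
```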

In summary, I will implement point 1 (entity paths) and point 4 (point clouds).

Update: I will do this once #126 and #130 are merged. Point 4 will already be implemented by #126.

I believe this issue is fixed with #126, #130, and #136.