BerkeleyAutomation/perception

running Kinect2Sensor and kinect_bridge simultaneously

sjhansen3 opened this issue · 2 comments

I am currently using the perception package with the Kinect sensor via the Kinect2Sensor class.

What is the recommended way to visualize and publish sensor data using the perception package?

I am also using kinect_bridge: https://github.com/code-iai/iai_kinect2. I use the image data it publishes from the Kinect for two purposes:

  1. Visualizing point clouds and image data in rviz (nice to have for debugging).
  2. AR tag tracking, which I use for calibration and debugging.

The problem: I believe I cannot run kinect_bridge and connect to the Kinect through perception (Kinect2Sensor) at the same time, because both compete for the Protonect driver (https://github.com/OpenKinect/libfreenect2).

This leaves me with three options:

  1. Stop using kinect_bridge and publish the sensor data (images, point clouds, etc.) myself through a node I create using perception. Has this already been built? Would you welcome a PR on this?
  2. Stop using perception and just use the kinect bridge.
  3. Get kinect_bridge and perception to work together. Perhaps I have not configured things properly and there is a way for both to access the Kinect simultaneously. Have you done this before with the Kinect or other sensors?

Thanks!

Sorry you ran into this issue, but it makes a ton of sense. At the time we built this class, the Kinect2 ROS drivers didn't meet our requirements, hence the direct connection to the Protonect driver.

The best way to fix this is to use the Protonect driver only through kinect_bridge, and to write a new sensor class in perception if you'd like to use the sensor with the existing interfaces in the dex-net and gqcnn classes. See our Ensenso driver for a simple example of building a wrapper class that subscribes to the camera topics. We'd definitely welcome a PR on this.
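To make the suggested approach concrete, here is a minimal sketch of the caching pattern such a topic-subscribing wrapper typically uses. The class and topic names are hypothetical (not the actual perception or iai_kinect2 API); in a real node, `__init__` would register `rospy.Subscriber(...)` calls that invoke the callbacks, while here the callbacks are plain methods so the pattern can be shown without a running ROS master:

```python
import threading


class RosCameraSensor:
    """Hypothetical wrapper sensor that caches the latest frames
    published on ROS camera topics.

    In a real ROS node, __init__ would register subscribers, e.g.:
        rospy.Subscriber('/kinect2/hd/image_color', Image, self._color_callback)
        rospy.Subscriber('/kinect2/hd/image_depth_rect', Image, self._depth_callback)
    (topic names are illustrative and depend on the kinect_bridge config).
    """

    def __init__(self):
        # Lock guards the cached frames, since rospy invokes
        # callbacks from its own threads.
        self._lock = threading.Lock()
        self._latest_color = None
        self._latest_depth = None

    def _color_callback(self, msg):
        # Called for each incoming color image message.
        with self._lock:
            self._latest_color = msg

    def _depth_callback(self, msg):
        # Called for each incoming depth image message.
        with self._lock:
            self._latest_depth = msg

    def frames(self):
        # Return the most recent (color, depth) pair, mirroring the
        # frames()-style interface of the perception sensor classes.
        with self._lock:
            return self._latest_color, self._latest_depth
```

The design choice is simply to decouple the asynchronous topic stream from the synchronous sensor interface: callbacks overwrite a cached frame, and `frames()` returns whatever arrived last, so downstream code that expects a blocking-read sensor keeps working unchanged.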

Assuming fixed per #7