ifm/ifm3d-ros

Running multiple O3D

pfrydlewicz opened this issue · 6 comments

When operating an array of O3D cameras, what would be the best way to do it, in your opinion?

  1. Using ROS remapping
  2. Implementing multiple instances in the ROS wrapper

I am not that experienced in ROS yet, so I would be grateful for advice.

You would launch an instance of the camera nodelet for each camera that you have. For example, say you have two cameras. Camera 1 is at 192.168.0.69 and Camera 2 is at 192.168.0.70. A reasonable way to deal with this is:

$ roslaunch ifm3d camera.launch nn:=camera1
$ roslaunch ifm3d camera.launch ip:=192.168.0.70 nn:=camera2

A few things to note. On the first launch line, I did not specify an IP address because camera 1 is using the default. Using the approach above, all topics and services for camera 1 will be namespaced under /ifm3d/camera1. So, for example, the point cloud of camera 1 can be acquired at /ifm3d/camera1/cloud and that of camera 2 at /ifm3d/camera2/cloud. Additionally, the coordinate frame transformations will be properly namespaced in the tf tree: you will have coordinate frames named camera1_link, camera1_optical_link, camera2_link, and camera2_optical_link. Of course, without a calibration and a common parent frame, you cannot transform the data from one camera into the other's coordinate frame, but that is beyond the scope of your question.
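If you always launch the same set of cameras, the two roslaunch invocations above can also be wrapped into a single top-level launch file. A minimal sketch, assuming camera.launch exposes ip and nn as launch arguments exactly as used on the command lines above:

<launch>
  <!-- Camera 1 uses the default IP, so only the namespace is set -->
  <include file="$(find ifm3d)/launch/camera.launch">
    <arg name="nn" value="camera1"/>
  </include>

  <!-- Camera 2 at its own IP, in its own namespace -->
  <include file="$(find ifm3d)/launch/camera.launch">
    <arg name="ip" value="192.168.0.70"/>
    <arg name="nn" value="camera2"/>
  </include>
</launch>

Saved as, e.g., two_cameras.launch in your own package, this brings both cameras up with a single roslaunch call.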

Also realize that if you plan to use the pre-configured rviz files provided with this package, you will have to change the topics that the various displays subscribe to as well as set a new fixed frame per the tf frames noted above. If you know a priori how many cameras you plan to use and what you plan to name them, I'd suggest just creating your own rviz config file and wrapping it in a launch file modeled after this one.

Realize that the solution above runs each camera as a separate Unix process. If you are looking to run a low-latency pipeline operating on data from both cameras, you should write your own launch file, modeled after this one, where you load two (or more) instances of the camera nodelet into the same nodelet manager. At that point you can consume the data in-process rather than out-of-process and not incur all of the overhead of message serialization, transmission down and up the TCP stack, and deserialization.
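A rough sketch of such a shared-manager launch file follows. Note that the nodelet type name (ifm3d/camera_nodelet here) and the parameter names are assumptions for illustration; check the nodelet launch file shipped with the package for the actual names:

<launch>
  <!-- One shared nodelet manager hosting both cameras -->
  <node pkg="nodelet" type="nodelet" name="ifm3d_manager" args="manager" output="screen"/>

  <!-- Camera 1 (default IP), loaded into the shared manager -->
  <node pkg="nodelet" type="nodelet" name="camera1"
        args="load ifm3d/camera_nodelet ifm3d_manager" output="screen">
    <param name="ip" value="192.168.0.69"/>
  </node>

  <!-- Camera 2, loaded into the same manager -->
  <node pkg="nodelet" type="nodelet" name="camera2"
        args="load ifm3d/camera_nodelet ifm3d_manager" output="screen">
    <param name="ip" value="192.168.0.70"/>
  </node>
</launch>

Because both nodelets live in the same manager process, subscribers in that manager receive messages by pointer passing rather than over the loopback TCP stack.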

HTH.

@tpanzarella Thanks a lot.
That's the answer I was looking for. I didn't know it was that easy; just changing the namespace in the launch file is a pretty nice solution.

I'm planning to implement a camera calibration to merge the devices, using extrinsic calibration settings for each device.
There will probably be some point-cloud refinement necessary, at least reconstruction of neighborhood relations, but that's a second step, if necessary at all.

Thanks for your support.

Paul

Hey Paul, just a word of warning: please do not use the on-camera extrinsic calibration; you will not get the results you seek. Perform your calibration and publish it into your ROS graph via tf.

Why is that?
What is the weakness of the embedded transformation?

No. It is a transform done on the camera, WRT the optical frame, before ifm3d ever sees the data. However, the low-level ifm3d camera driver, as it parses the bytes from the camera, performs an explicit transform from the optical frame to the camera frame, assuming a priori that the optical frame has been left unchanged (i.e., is set on the camera as 0,0,0,0,0,0). Basically, if you mess with the on-camera extrinsics, you will likely get confusing (or garbage) output from ifm3d unless you really know what you are doing. I say that because I have exploited this behavior several times myself, for example, to flip the image before ifm3d sees the data when the camera had to be mounted upside down for mechanical reasons, so that my algorithm could still reason about the data in the normal way, as if the camera were mounted right-side-up.

Thanks for pointing me to tf. It makes life a lot easier.
A quick test providing the transformation between the world frame and the ifm3d-ros camera_link frame worked well.
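For anyone else reading along, a quick test like the one described can be done with tf's static_transform_publisher wrapped in a launch file. The pose values below are placeholders (x y z in meters, yaw pitch roll in radians), and camera_link is assumed to be the camera's root frame name; adjust both to your setup:

<launch>
  <!-- args: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_camera"
        args="0 0 1.0 0 0 0 world camera_link 100"/>
</launch>

With multiple cameras, you would add one such node per camera (e.g., world to camera1_link, world to camera2_link), which gives the frames the common parent needed to fuse the clouds.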

Thank you @tpanzarella for sharing your expertise on that.