ensenso/ros_driver

Output depth image

Closed this issue · 10 comments

I want to output the depth image, so I receive the image from the topic /depth/image.
Then I use CvBridge() to transform the message from the topic into .png images.
It worked with this code:

depth = bridge.imgmsg_to_cv2(depth_msg, "32FC1")
cv2.imwrite(cam1_path + image_name, depth)

But the result is not good: the output gray image contains only two values, 255 and 0. Can you tell me how to solve this problem?

The depth image contains the depth as float values in meters. In order to save it to an image, you have to scale the values to the usual gray value range of 0 to 255 and decide how you want to represent NaN values (where there is no depth data).

As a start here is some code from the internet, which does the scaling part with cv2.normalize: https://stackoverflow.com/a/47811136

Thank you very much. Also, do you know how to save 32FC1 data as text?

I would like the code I write to use the depth information from the camera directly.

You can extract the individual pixel values as described here and then save them however you want.
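For the text-saving part, one option is NumPy's `savetxt`/`loadtxt`, since the converted 32FC1 image is a plain 2-D float array (a sketch with a toy array; the default format writes enough digits that the float32 values survive the round trip, and NaN is written literally as "nan"):

```python
import numpy as np

# Toy 32FC1 depth image (float meters); in practice this is the array
# returned by bridge.imgmsg_to_cv2(depth_msg, "32FC1").
depth = np.array([[0.25, 0.50],
                  [0.75, np.nan]], dtype=np.float32)

# One line of text per image row.
np.savetxt("depth.txt", depth)

# Reading it back restores the values (as float64 by default).
restored = np.loadtxt("depth.txt").astype(np.float32)
```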

But I don't need pixel values. As you said, the 32FC1 data contains distance information. Is there really no way to extract the distance information directly from the 32FC1 data?

The image is converted to a depth map, and then the depth information is extracted from the depth map. In this way, the accuracy of the information is reduced.

What do you mean? The information in the 32FC1 image is the depth information. As I said, each pixel is a float value which is the depth in meters.

Before, my problem was how to get and output the depth information directly. Your advice was to extract individual pixel values from the image.

But the answer given in the link is to get the pixel value from an image already converted by OpenCV.

First of all, the format of the information obtained by your code is 32FC1. This kind of data cannot be viewed directly with an image viewing tool. The method I'm using now is to process the 32FC1 data (your suggestion for the previous question) and convert the image to a depth map (where the gray level represents depth).

As you said, the information in the 32FC1 image is the depth information: each pixel is a floating-point value, the depth in meters.

But at this stage, my operation converts the 32FC1 image into a PNG depth map. In that case, the pixel values read back from the PNG image are not accurate.

So my question is: how do I extract the depth information directly from the 32FC1 image, instead of extracting it from the PNG image?

If you have the depth image in memory, you can access the float values directly with OpenCV, as described in the link I posted.
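As a sketch of that direct access: the 32FC1 image converted by CvBridge is just a 2-D float32 NumPy array, so the distance at any pixel can be read by indexing, with no PNG detour (toy array standing in for the CvBridge output):

```python
import numpy as np

# Stand-in for the array returned by imgmsg_to_cv2(depth_msg, "32FC1").
depth = np.array([[0.5, 1.0],
                  [1.5, 2.0]], dtype=np.float32)

row, col = 1, 0
distance_m = float(depth[row, col])  # depth in meters at that pixel
```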

If you want to save it to disk and load the data later, you have to save it in a way that does not change the values. PNG is not the correct format here, because it does not support float images. You can save the image e.g. in the TIFF format (which does support float images) or in some custom format (e.g. as text or a binary blob).

In the Tutorials:
By default, all data produced by the camera node is in the <serial>_optical_frame, where <serial> denotes the serial of the Ensenso camera. To show this data in RViz you have to change the fixed frame or provide a transformation between this frame and the map frame.

But when I try to show the point cloud in RViz, the Fixed Frame dropdown only shows map, so I can't display the point cloud. What is the problem?

You should be able to simply type the name of the camera's optical frame in RViz. See https://stackoverflow.com/a/52431716

Alternatively you can also

  • Publish a transformation between map and the camera frame using static_transform_publisher.
  • Launch the camera node with the camera_frame parameter set to map.