IntelligentRoboticsLabs/gb_visual_detection_3d

Use realsense d435

SeungRyeol opened this issue · 65 comments

I am using a RealSense D435. So I changed the "darknet_3d.yaml" file as follows.

darknet_ros_topic: /darknet_ros/bounding_boxes
output_bbx3d_topic: /darknet_ros_3d/bounding_boxes
point_cloud_topic: /camera/depth/color/points
working_frame: camera_link
mininum_detection_thereshold: 0.3
interested_classes: ["person", "chair"]

Here is the launch file.

<launch>

  <!-- Config camera image topic  -->
  <arg name="camera_rgb_topic" default="/camera/color/image_raw" />

  <!-- Console launch prefix -->
  <arg name="launch_prefix" default=""/>

  <!-- Config and weights folder. -->
  <arg name="yolo_weights_path"          default="$(find darknet_ros)/yolo_network_config/weights"/>
  <arg name="yolo_config_path"           default="$(find darknet_ros)/yolo_network_config/cfg"/>

  <!-- ROS and network parameter files -->
  <arg name="ros_param_file"             default="$(find darknet_ros)/config/ros.yaml"/>
  <arg name="network_param_file"         default="$(find darknet_ros)/config/yolov2.yaml"/>

  <!-- Load parameters -->
  <rosparam command="load" ns="darknet_ros" file="$(arg network_param_file)"/>
  <rosparam command="load" file="$(find darknet_ros)/config/ros.yaml"/>
  <param name="darknet_ros/subscribers/camera_reading/topic" type="string" value="$(arg camera_rgb_topic)" />

  <!-- Start darknet and ros wrapper -->
  <node pkg="darknet_ros" type="darknet_ros" name="darknet_ros" output="screen" launch-prefix="$(arg launch_prefix)">

    <param name="weights_path"          value="$(arg yolo_weights_path)" />
    <param name="config_path"           value="$(arg yolo_config_path)" />
  </node>

  <!-- Start darknet ros 3d -->
  <node pkg="darknet_ros_3d" type="darknet3d_node" name="darknet_3d" output="screen">
    <rosparam command="load" file="$(find darknet_ros_3d)/config/darknet_3d.yaml" />
  </node>
</launch>

This is the node graph (Nodes/Topics (all)):
[node graph screenshots]

darknet works fine, but the /darknet_ros_3d/bounding_boxes topic does not have any information.

How can I solve this problem?


I don't see the problem with the information provided. The rqt_graph is OK and darknet_3d.yaml too. Have you changed anything in darknet_ros_3d besides the yaml file?

No. I only modified the yaml config and launch files.
[topic list screenshot]
I do not know what the problem is.

I found the following message in the middle of the terminal:

person: 60%
person: 57%
person: 32%
person: 57%
person: 32%
Failed to find match for field 'rgb'.
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 65051840) >= this->size() (which is 271048)
person: 61%
person: 62%
person: 62%
person: 59%
person: 31%
person: 59%
person: 31%
person: 64%
person: 64%
[darknet_3d-4] process has died [pid 30971, exit code -6, cmd /home/shin/catkin_ws/devel/lib/darknet_ros_3d/darknet3d_node __name:=darknet_3d __log:=/home/shin/.ros/log/fcb614b8-2dbb-11ea-8605-4cedfbc5f8af/darknet_3d-4.log].
log file: /home/shin/.ros/log/fcb614b8-2dbb-11ea-8605-4cedfbc5f8af/darknet_3d-4*.log
person: 63%
cup: 34%
person: 57%
person: 57%

My guess is that I need to modify the code of the Darknet3D.cpp file for realsense, but I don't know how to fix it.
How can I solve the problem??

You should not need to change the Darknet3D.cpp code. Can you tell me which ROS package you are using as the RealSense driver, so I can see the messages, please?

I've seen in the realsense-ros documentation that if you want the point cloud to be published, you have to run rs_camera.launch with a specific parameter. I don't know if you did this. Probably, this is the mistake. darknet_ros works fine because it doesn't use the point cloud, but darknet_ros_3d crashes because it needs it.
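
If I remember correctly, with the standard realsense2_camera driver that parameter is the pointcloud filter, something like this (just a sketch; check the realsense-ros README for your version):

roslaunch realsense2_camera rs_camera.launch filters:=pointcloud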

Hi @SeungRyeol ,

I don't have a D435 right now. I have one in my office, but I can't access it until next week because of the holidays.

Meanwhile, you could check (use rviz), in this order:

  1. Is the camera publishing images? check the topic name and the frame id (visualize It!!)
  2. Is the point cloud being published? check the topic name and the frame id (visualize It!!)
  3. Is darknet_ros publishing the bounding boxes? Check that the topic is /darknet_ros/bounding_boxes and the content with rostopic echo

Check darknet_ros/config/ros.yaml in case something went wrong.

If everything is OK up to this point, the problem is in darknet_ros_3d. Then, check all the topics in darknet_ros_3d/config/darknet_3d.yaml. Check the classes too; they should match the ones in /darknet_ros/bounding_boxes. Maybe you don't have base_footprint if you are not using a robot. If so, change it to camera_link.
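
For those checks, something like the following usually works (topic names are the defaults used in this thread; adjust them to your setup):

rostopic hz /camera/color/image_raw
rostopic hz /camera/depth/color/points
rostopic echo /darknet_ros/bounding_boxes -n1
rostopic echo /darknet_ros_3d/bounding_boxes -n1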

In your config, I have seen /camera/depth/color/points. Is this topic really correct?

Tell me if you found the error. In any case, I will try next week to reproduce your problem with the same camera.

Best
Francisco

Running the launch file of the RealSense camera with rs_rgbd.launch instead of rs_camera.launch solved the problem. Really thank you. @fmrico @fgonzalezr1998
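
(For reference, assuming the standard realsense2_camera package, that launcher is started with roslaunch realsense2_camera rs_rgbd.launch; the rgbd_launch package may need to be installed for it to be available.)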

Hi @SeungRyeol

Yes, I am testing it right now. With rs_camera.launch I am not able to see the point cloud. It seems not to be depth-registered.

I am happy to see that you made it work!!

Enjoy Darknet_ros_3d!!!

Hi all,
I currently have nearly the same problem as SeungRyeol. I'm trying to run darknet_ros_3d with a RealSense D435.
darknet_ros finds objects and publishes the 2D bounding boxes.
The RealSense publishes the depth_registered point cloud, so this seems to be the correct point cloud.
But in the end there is no data in the 3D bounding boxes :-(

When I look at the bounding_boxes in the rqt topic monitor, I wonder why the empty 3D bounding_boxes need double the bandwidth of the 2D bounding_boxes... is that normal?

I started with ROS two weeks ago, so I'm not very familiar with it yet...

Can someone please help me find the problem? That would be great!

Here are some screenshots of my configuration. If there is something missing, please tell me!

[screenshots: rqt topic monitor, rviz, darknet_3d.yaml, ros.yaml, rqt_graph, darknet_ros_3d.launch, roswtf output]

THANKS A LOT!

Greetings, Hannes

Hi @hannes56a, the rqt_graph seems correct. If you are using an Intel RealSense, make sure that you are launching rs_rgbd.launch. Also, darknet_ros_3d will only detect the objects that are in the list specified in darknet_3d.yaml. You can change it to detect a TV monitor or any other object (make sure the object names you put in the list are the same ones that YOLO detects).
But you probably have some kind of node communication problem, because there are no TF messages and some warnings say that the darknet_ros nodes are not connected.

One thing you can try is running rostopic echo to see if messages are being received correctly by darknet_ros_3d, or publishing darknet_ros bounding boxes manually using rostopic pub. This way, you can check whether the nodes are communicating with each other correctly.
NOTE: You must do all these tests with the camera driver (rs_rgbd.launch) and darknet_ros_3d.launch running.

Regarding the bandwidth: darknet_ros's bounding boxes are published at roughly 1 Hz, whereas darknet_ros_3d's bounding boxes are published at 10 Hz and contain more data. So even though I have never measured the bit rate, I don't think it is abnormal that it is higher.

Hi @fgonzalezr1998, you are the man! Your hint about the detection objects was the solution. I always tested the 3D bounding_boxes with the tvmonitor, and this object was only in ros.yaml, not in darknet_3d.yaml. Now I have added it and it works fine.

Thank you!

Thanks, @fgonzalezr1998 for hints, they were very useful.

I encountered a strange issue here: darknet_ros_3d works one time and then doesn't work ten times. Could you help with that?

I'm running the system exactly as you explained, and I tried to publish some dummy 3D bounding box messages, but didn't have any success.

Hello @mhaboali. First of all, what I would do is run the Intel RealSense driver (using rs_rgbd.launch) and check with rviz that the depth_registered points look right.
Second: run darknet_ros. When you see that darknet_ros works well and the bounding boxes are being received on the proper topic, you can kill it.
Third: review config/darknet_3d.yaml and make sure that all parameters are correct (the input topic, which is the darknet_ros output topic; the point cloud topic, which should be the registered one I mentioned before; the working frame, which should be camera_link or any other frame whose axes are equivalent; and minimum_probability, because if the probability is smaller than this value, darknet_ros_3d won't count the detection).
Fourth: open launch/darknet_ros_3d.launch and, in the fourth line (config camera image topic), you may have to change the RGB camera topic. This is the topic darknet_ros is going to subscribe to. I think it is /camera/rgb/image_raw for the Intel RealSense camera.
Fifth: run launch/darknet_ros_3d.launch without running darknet_ros beforehand. This launcher starts both the darknet_ros and darknet_ros_3d nodes.

  • At this point, everything should work properly. If you have a problem in any of these steps, reply saying which was the last step you checked successfully and which step gave you the problem, so I can trace the mistake better. A condensed command view of these steps is sketched below.
    In any case, I will record a video tutorial soon, showing all this configuration and how it works.
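
A condensed command view of those steps, assuming the default launch-file and topic names (yours may differ):

roslaunch realsense2_camera rs_rgbd.launch
roslaunch darknet_ros darknet_ros.launch        # step 2: check /darknet_ros/bounding_boxes, then kill it
roslaunch darknet_ros_3d darknet_ros_3d.launch  # step 5: starts darknet_ros and darknet_ros_3d together
rostopic echo /darknet_ros_3d/bounding_boxes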

Hi @fgonzalezr1998, thanks so much for your clear steps, they were very useful for me. Now it works :D. It turns out the issue appeared when I used darknet_ros_3d_gdb.launch.

I have one more question and please let me know if I have to open a new issue for it. How can I track those 3d boxes?

Your help is highly appreciated!

Yes @mhaboali , open a new issue and I will answer you there, please

Hello @fgonzalezr1998, thanks to your code I have gotten a few steps closer to my goal.
But I have run into some problems.

I'm trying to run darknet_ros_3d with a stereo camera (ocams_1cgn) made by Withrobot, and I really want to get 3D coordinate values like xmin, ymin, and so on.
My camera finds some objects and shows them with 2D boxes in the YOLO window.
It also publishes point cloud info.

However, there are no 3D boxes and no distance info either...

I tried to change some of the code to match my camera, but I don't think it is quite right.

Could you tell me what I'm doing wrong, or what I'm missing?
I'm a complete beginner in ROS and computer science...

I'll upload some pics such as my rqt_graph, ros.yaml and so on.
[screenshots: 3D node output, rqt_graph, topic monitor, darknet_3d.yaml, ros.yaml, darknet_ros_3d.launch, Darknet3D.cpp]

THANK YOU VERY MUCH

-Myoungjin

Hi @MyongjinKim, I've seen that you have two different topics that publish sensor_msgs/PointCloud2. Try using the stereo/point_cloud topic.

NOTE: You don't need to change parameters in the code, because they are read from the YAML file. You only need to change darknet_3d.yaml, and the launcher file to change the RGB image topic, as you have done.

Also, try to visualize the point cloud with rviz. You can launch rviz by typing rosrun rviz rviz to make sure it is working properly.

It may be useful to use the following commands for debugging: rosnode list, rosnode info, rostopic list, rostopic info and rostopic echo.

IMPORTANT!! You must launch darknet_ros_3d using roslaunch darknet_ros_3d.launch, not rosrun, because otherwise the config file will not be used. If you use rosrun, you have to pass the YAML file with an additional argument.
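
For example, assuming this repository's default package and file names:

roslaunch darknet_ros_3d darknet_ros_3d.launch

Or, if you really need rosrun, load the parameters first with something like this (the /darknet_3d namespace matches the node name used in the launch file):

rosparam load $(rospack find darknet_ros_3d)/config/darknet_3d.yaml /darknet_3d
rosrun darknet_ros_3d darknet3d_node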

I'll be waiting for your feedback! ;)

Thanks for the fast review :-)
I tried changing the cloud topic to stereo/point_cloud but it didn't work... :(
And I also tried rostopic echo /darknet_ros_3d/bounding_boxes, and then I got this error message:

  [ERROR] [1594374501.762555340]: "ocams_1cgn" passed to lookupTransform argument target_frame does not exist.
  Failed to find match for field 'x'.
  Failed to find match for field 'y'.
  Failed to find match for field 'z'.
  Failed to find match for field 'rgb'.

[screenshot: camera_link error]

Have you ever seen the above message?

[screenshots: point cloud and node graph]

I think the problem is that /darknet_3d can't get '/tf' info so it can't calculate the distance....
If you have any good ideas, please let me know.

Thanks a lot.

Myoungjin

@MyongjinKim exactly! It means that there is no tf between the two frames. Without a tf between the point cloud frame and the working frame, darknet_ros_3d cannot calculate the 3D coordinates.

You can see all available frames by adding TF in rviz. If there is some TF whose axes are the same as camera_link's (X-axis pointing forward, Y-axis pointing left, and Z-axis pointing up), you can use it by changing the working frame in darknet_3d.yaml. In this case, the 3D coordinates will be calculated with respect to this new frame.

However, if there is no TF whose axes have these characteristics, you can publish the transform between the two frames yourself using static_transform_publisher, indicating the translation and rotation between them.

If you are going to use this camera more often, I recommend you make a custom package with a launcher that includes both the static_transform_publisher and the darknet_ros_3d launcher, as sketched below.
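
A minimal sketch of such a launcher (the frame names and the all-zero offsets are placeholders; put your real camera-to-base transform there):

<launch>
  <!-- args: x y z yaw pitch roll parent_frame child_frame -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="camera_tf_pub"
        args="0 0 0 0 0 0 base_link camera_link" />
  <include file="$(find darknet_ros_3d)/launch/darknet_ros_3d.launch" />
</launch>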

If you have any other doubts, don't hesitate to ask.

Oh, finally I can get the distance value!!!!
I did rostopic echo /tf_ and I got a frame_id: "base_link"
And I changed the yaml file.
Thank you so much @fgonzalezr1998

Lastly, may I ask you something?
I got these values but I can't understand what exactly they mean.
[screenshots: distance output and README]

It is difficult to understand them because they are not clearly described in your README.
Could you explain them in more detail?

THANKS A LOT

-Myoungjin

darknet_ros_3d takes the 2D bounding boxes as input and returns 3D bounding boxes. This means that each detected object has an imaginary box around it. Xmin is the nearest X coordinate and Xmax the farthest X coordinate. The same goes for Y and Z.
So, you don't get a single (x, y, z) coordinate. You get an (x, y, z) maximum and an (x, y, z) minimum, which I represent in the following way: (xmin, xmax, ymin, ymax, zmin, zmax). This is because these coordinates are not the 3D coordinates of a single point; they are the 3D coordinates of the volume around the object.
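
For reference, the output message (gb_visual_detection_3d_msgs/BoundingBoxes3d) looks roughly like this; the field names are from memory and may differ slightly between versions, so check the .msg files in your checkout:

std_msgs/Header header
BoundingBox3d[] bounding_boxes
    string Class
    float64 probability
    float64 xmin
    float64 xmax
    float64 ymin
    float64 ymax
    float64 zmin
    float64 zmax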

For a better understanding, you can visualize these 3D bounding boxes by adding a MarkerArray display in rviz and subscribing it to the darknet_ros_3d/markers topic.

When you look at the output 3D bounding boxes, what you see is an array of objects, where each object has its 3D coordinates, its class name, and its detection probability.

I hope my explanation was useful @MyongjinKim

Thank you for your detailed explanation.
Actually, I'd like to use my own weights and cfg files, so I put them in the cfg and weights folders and modified some files like darknet_ros_3d.launch, but it didn't work, and I got the error message below.

[ INFO] [1594611813.531696317]: [YoloObjectDetector] Node started.
[ INFO] [1594611813.535463653]: [YoloObjectDetector] Xserver is running.
[ INFO] [1594611813.536719577]: [YoloObjectDetector] init().
YOLO V3
First section must be [net] or [network]: Resource temporarily unavailable
[darknet_ros-1] process has died [pid 18872, exit code 255, cmd /home/hotshot/catkin_ws/devel/lib/darknet_ros/darknet_ros __name:=darknet_ros __log:=/home/hotshot/.ros/log/f8c0a4b8-c4ba-11ea-9585-080027300d94/darknet_ros-1.log].
log file: /home/hotshot/.ros/log/f8c0a4b8-c4ba-11ea-9585-080027300d94/darknet_ros-1*.log


Could you tell me what I should fix???

I'm sorry to keep asking you about the little things. Thank you so much. @fgonzalezr1998

-Myoungjin

If you want to use other weights, you can switch between all the weights available on the darknet_ros GitHub page, or you can re-train the neural network. This is something I did some time ago, and I recommend you follow these steps. @MyongjinKim

Hi @fgonzalezr1998, it's Myoungjin.
I finally got my weights file working with darknet_ros_3d. The problem was the cfg file.
So now I can get the 3D box information, but I am still wondering how far the object is from my stereo camera. To calculate the straight-line distance from the base, do I compute sqrt((xmin+xmax)/2 + (ymin+ymax)/2 + (zmin+zmax)/2)?
And do you think that looks right?

Thanks a lot

Myoungjin

First of all, the 3D bounding boxes provide the min and max coordinates on each axis. So, if you want to get the Euclidean distance between the camera and the detected object, it is a good idea to take the center of the object.

In this way, the coordinates of that point would be X = (Xmin+Xmax)/2.0, Y = (Ymin+Ymax)/2.0, and Z = (Zmin+Zmax)/2.0. Once this point is calculated, you know that the object's position vector is V = (X, Y, Z), so you only need to compute the magnitude of this vector: D = sqrt(X^2 + Y^2 + Z^2).

On the other hand, if you only want to know how far away the object is along the X axis, you only have to compute: D = (Xmin+Xmax) / 2.0.

I want to make clear that the 3D bounding boxes give you the min and max distances along the three coordinates where the object is located. With this information you know everything you could need, because you not only know the Euclidean distance; you can also know the volume of the object and its position in 3D space.
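
As a minimal sketch of that calculation (assuming the gb_visual_detection_3d_msgs/BoundingBoxes3d message and the default /darknet_ros_3d/bounding_boxes topic; the Class field name and the topic may need adjusting to your version), a node that prints the center and Euclidean distance of each detected object could look like this:

#include <cmath>

#include <ros/ros.h>
#include <gb_visual_detection_3d_msgs/BoundingBoxes3d.h>

void boxesCallback(const gb_visual_detection_3d_msgs::BoundingBoxes3d::ConstPtr & msg)
{
  for (const auto & box : msg->bounding_boxes) {
    // Center of the 3D box, expressed in the working frame
    double x = (box.xmin + box.xmax) / 2.0;
    double y = (box.ymin + box.ymax) / 2.0;
    double z = (box.zmin + box.zmax) / 2.0;
    // Euclidean distance from the working frame origin to that center
    double d = std::sqrt(x * x + y * y + z * z);
    ROS_INFO("%s: center (%.2f, %.2f, %.2f) -> distance %.2f m",
             box.Class.c_str(), x, y, z, d);
  }
}

int main(int argc, char ** argv)
{
  ros::init(argc, argv, "distance_from_boxes");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/darknet_ros_3d/bounding_boxes", 1, boxesCallback);
  ros::spin();
  return 0;
}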

@MyongjinKim I hope I solved your doubts

Hello @fgonzalezr1998, it's been a long time.
I just saw your README file and I want to ask you something:
should I install ROS2 to use your package, or is there any way to use it with ROS Melodic?
Regards
Myoungjin :))

Hi @MyongjinKim, we have 3 branches: melodic, ros2_eloquent, and master. The last one contains the same as ros2_eloquent. So, if you use ROS2 (Eloquent), you must clone the ros2_eloquent branch; and if you use ROS1 (Melodic), you must clone the melodic branch.

@fgonzalezr1998 Oh I see, thanks a lot.
Could I ask one more thing about ROS? I'm really eager to use the x, y, z values calculated by your package. I'd like to print the center values in my terminal and also send them to other code as string or float values.
Can I do that? If yes, could you give me some tips?
Super regards
Myoungjin Kim

Hello @MyongjinKim, here you have a trial package that does exactly what you want. It only has one very simple node. Take a look at this package, please, and use it to see how it works. Run darknet_ros_3d and, when it is working, run the node from this package. If you have doubts, ask me ;)

@fgonzalezr1998 Oh my goodness!!!!
Thanks to you I now understand a little about how to write a node. Really, thank you so much @fgonzalezr1998. And if it's okay, can I ask one more thing? I've tried to run the darknet_ros_3d_trial package with

rosrun darknet_ros_3d_trial darknet3d_trial_node
or

rosrun darknet_ros_3d_trial darknet3d_trial_node.cpp

But I got the error message below:

me@me-desktop:~/catkin_ws$ rosrun darknet_ros_3d_trial darknet3d_trial_node
[rosrun] Couldn't find executable named darknet3d_trial_node below /home/me/catkin_ws/src/darknet_ros_3d_trial
me@me-desktop:~/catkin_ws$ rosrun darknet_ros_3d_trial darknet3d_trial_node.cpp
[rosrun] Couldn't find executable named darknet3d_trial_node.cpp below /home/me/catkin_ws/src/darknet_ros_3d_trial
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun] /home/me/catkin_ws/src/darknet_ros_3d_trial/src/darknet3d_trial_node.cpp

I had launched my camera launch file and darknet_ros_3d.launch before running rosrun.
Could you tell me what I should do?

Regards
Myoungjin

@MyongjinKim Excuse me, there was a bug in CMakeLists.txt. I have fixed it, so update the package and now it should work: rosrun darknet_ros_3d_trial darknet3d_trial_node.
Concerning the code, this is not the only way to do it. You don't have to use classes, but it is the best way to write a node.

@fgonzalezr1998 Amazing!! Thanks a lot!!!! I really want to express my gratitude to you.
Also, have you ever controlled a motor in ROS on a Xavier NX board?

I have used ROS on an Nvidia Jetson Nano @MyongjinKim. I think it's similar. I was working on the RoboCup challenges with a Turtlebot 2 using the Jetson.

@fgonzalezr1998 Oh, sounds great!! How was it? And if you don't mind, could I refer to your code?
Regards
Myoungjin Kim

Hi all,
in the past I was able to get this project running in ROS1 with an Intel RealSense D435. Now I'm trying the same in ROS2.
In ROS1 I used the rs_rgbd.launch file to get the correct 3D data from the D435, but for ROS2 I can't find such a launch file. None of my attempts to activate depth_registered/points have worked, so I'm not able to get the 3D bounding boxes...

Can someone help me? Perhaps someone has a working *.launch.py for me?

Hi @hannes56a, I use this driver and I get point cloud information from the /camera/pointcloud topic. It works well for me. Here you can see a usage demo. Ask me whatever you need ;)

For me, it is great that you are using darknet_ros under ROS2, because that way I can improve it. Under ROS1 I have had feedback, but not under ROS2. Thank you.

@MyongjinKim The repository with all the code I wrote is private, but I have invited you so you can see it. It probably won't compile, which is normal because it depends on Dialogflow and uses an old version of darknet_ros_3d. That happened because when the pandemic began I stopped its development. However, it can serve you as a guide. The Jetson was running the ROS master, kobuki_node (the Kobuki driver), the navigation packages (amcl, move_base, etc.), and the Darknet ROS neural network. And we were monitoring all the data from my laptop using rviz. If you want to see a video, tell me and I can upload it to YouTube and give you the URL.

@fgonzalezr1998
Yeah, now it works... There were a couple of failures, I think, but with your information that it has to work with /camera/pointcloud I was able to find the other failure...
Thanks a lot for this helpful hint!

@fgonzalezr1998 Yheaaaaaaa! I'm really wanna watch it!!


@fgonzalezr1998 Me too!! ^_^ I am also doing a project with a Jetson Nano and a RealSense D435. Could you also give me access to your private repository? Thanks so much!!

@MyongjinKim @jameslele Here you can see the Turtlebot 2 navigating and detecting objects with the neural network, using the Nvidia Jetson Nano mounted on the Turtlebot. It uses ROS Melodic.

The only difference is that the Intel RealSense was not used; it used an Orbbec Astra camera. But if you want to use the RealSense camera, you only have to install its driver on the Jetson and launch it instead of the Orbbec one.

  • NOTE: You both should have an invitation to the repository with all the code. Some things have changed in darknet_ros_3d since I wrote that code, so it will not compile now, but it is a good guide, especially for seeing the structure. I use a tool for creating state machines called BICA, developed by @fmrico.


@fgonzalezr1998 Hi, thanks very much for your reply. It helped me a lot. I'm sorry it took me so long to respond. I still have not gotten the invitation yet. Could you send it again?

Sorry @jameslele, here is the invitation.

Oh no, it has expired.(。•́︿•̀。)

@fgonzalezr1998 @jameslele Hi all, I hope you guys are doing well.
I'd like to ask you about my issue. I've been trying to send the 3D coordinates from the gb_visual_detection_3d package to an Arduino as float values, so I tried to write some code, but I've failed... TT
If it's okay, could you give me some tips?
I want to publish a topic that contains the center distance info and subscribe to it on my Arduino using the rosserial package...
Regards
Myoungjin

@jameslele New invitation

@fgonzalezr1998 I have already accepted. Thanks so much!!

I need to detect custom objects using darknet and want to get 3d coordinates.

I retrained the model and obtained the weights.

Can someone please guide me regarding the next steps @MyongjinKim ?

Thanks for the help.

Hi @hsaleem1, if you have darknet_ros working successfully with your own weights, you only have to define which objects you want darknet_ros_3d to detect by editing config/darknet_3d.yaml, and then launch it; a sketch of such a config is shown below.

You must also configure the topics of the camera RGB image, point cloud, etc. properly.
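
For example, a minimal darknet_3d.yaml for custom classes might look like this (topic names are the RealSense defaults used earlier in this thread; replace the placeholder class names with the labels your retrained network actually outputs):

darknet_ros_topic: /darknet_ros/bounding_boxes
output_bbx3d_topic: /darknet_ros_3d/bounding_boxes
point_cloud_topic: /camera/depth_registered/points
working_frame: camera_link
mininum_detection_thereshold: 0.3
interested_classes: ["my_object_a", "my_object_b"]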

@fgonzalezr1998, thanks for your help. It's done now.

@fgonzalezr1998 thanks for your code.
But I've run into a problem: although I checked everything, I couldn't get the distance.

First, I checked the following:

  1. darknet_ros publishes the bounding boxes well
  2. The point cloud shows up well in rviz
  3. darknet_3d.yaml
  4. Running rs_rgbd.launch and darknet_ros_3d.launch

After that, I faced the problem below.
[screenshot]
It's the same as @MyongjinKim's.

But I cannot get /tf from rostopic echo /tf.
I can get tf from /tf_static, but there are lots of frame_ids. I have already changed darknet_3d.yaml, but it didn't work.

Below is my rqt_graph:
[screenshot]

@fgonzalezr1998 Sorry, sir,
I found my mistake in darknet_3d.yaml.
I had to change the topic to /camera/depth_registered/points, not /camera/depth/color/points.
Thank you

@fgonzalezr1998
Hi!
I wonder about the meaning of Zmin and Zmax.
I saw your answer above, but I cannot understand why Zmin or Ymin become negative.

Also, if the camera sees a person, does Zmax mean the distance between the person's back and the camera?

I saw that the distance along the Z coordinate should be (Zmax+Zmin)/2, but the real distance is closer to (Zmax-Zmin) in my case.
I think I'm wrong, but can you explain it? :'(

@Jaelly The X axis points forward; Y points to the right and Z points up. So, Z can be negative if the camera is placed higher than the object. Likewise, Y can be negative if the camera is placed to the right of the object.
Thus, Z doesn't give the distance between the camera and the object; the X axis does.

About the distance at which the object is placed (from the camera):

Xmin represents the nearest point.
Xmax represents the farthest point.
(Xmin + Xmax) / 2 represents the distance from the camera to the middle point of the object (you are taking the mean).

@fgonzalezr1998
thank you :)
Also, I have a problem with FPS.
To check the distance between the camera and the object, I use rs_rgbd.launch.
But on the Xavier Jetson NX the FPS is really low, even with the compressed image.
If I use rs_camera (without depth), then the FPS increases a little.

Are there any tips to improve the FPS with the Xavier NX + RealSense camera using rs_rgbd?

I set up the Xavier NX and another PC on the same network, and I use image_transport to subscribe to the compressed image in darknet_ros_3d.

Hi @Jaelly, perhaps the problem is that darknet_ros is not using your GPU. In that case, the problem may be related to CUDA.
Please check the FPS at which you receive the bounding boxes published by darknet_ros (not darknet_ros_3d).

2 cases may happen:

  • The FPS of the 2D bounding boxes (provided by darknet_ros) is low: in this case, you have to make some changes to your CUDA installation, or modify the LD_LIBRARY_PATH environment variable so that the CUDA dynamic linker is found at compilation time. You can see the related issues in the darknet_ros repository.

  • The FPS is high: in this case, the problem is not related to darknet_ros. It would mean that the data is being sent slowly over the network, perhaps due to limited bandwidth or something like that.

Please, carry out these checks and report the results here.
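
A quick way to measure both rates (topic names are the defaults used in this thread):

rostopic hz /darknet_ros/bounding_boxes
rostopic hz /darknet_ros_3d/bounding_boxes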

I think it is because of case 2.
I just ran realsense2_camera on the NX and watched it on the PC in rqt, without darknet_ros, and it was already slow.

So, sending the image frames is already slow.

Is there any tip to improve the FPS without changing the NX board? :'(

Are you reading the point cloud inside the NX, or on another PC through a WiFi or Ethernet connection? If you are getting the point cloud on another computer, you can't avoid the bandwidth limitations. However, if you are not sending the point cloud over the network... I don't know what is happening... Try running the rostopic hz command inside the NX (over ssh, for example), because if you are running rqt on the other computer that the NX is connected to, it's normal to see low FPS. @Jaelly

@fmrico @fgonzalezr1998 @jginesclavero
Hello, I am using object detection in ROS1 Noetic with an Intel RealSense D415 depth camera.
I'm trying to use the darknet_ros_3d package to get the 3D bounding box coordinates of the objects detected in the frame. I've read through all the documentation of the package and tried everything mentioned here, but I'm not able to get the depth coordinates. The point cloud data is published on the camera/depth/color/points topic.
The darknet package gives x, y coordinates, but when I do rostopic echo /darknet_ros_3d/bounding_boxes -n1 there are no coordinates...

this is the output

header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: "camera_rgb_optical_frame"
bounding_boxes:

The only issue I can think of after all the debugging is that the frame_id "camera_rgb_optical_frame" is wrong. I cannot find any frame with this name in rviz or rqt_graph.
I even changed the working_frame in the darknet_3d.yaml file to camera_link, but the frame_id is still the same. What is the correct value for the frame_id? Am I missing something?

Thanks!

I'm using ROS Noetic. I cloned this repo into the src folder of my catkin workspace, but when I try to compile it using catkin_make, it shows an error.

First error:
[screenshot]

Then I installed the missing file (I don't know if the missing file should already be there and I shouldn't install anything), but then it shows another similar error:
[screenshot]
For this one, I can't even install it using sudo apt-get install gb-visual-detection-3d-msgs.

Thanks in advance.

Hi @TapleFlib, you have to clone the repositories that contain darknet_ros_msgs and gb_visual_detection_3d_msgs into your workspace; they are in separate GitHub repos.
I recommend you clean the build and devel directories before recompiling. If the problem persists, try compiling with catkin_make -j1.
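
A rough sketch of those steps (the repository URLs are the usual upstream locations; double-check them before cloning):

cd ~/catkin_ws/src
git clone --recursive https://github.com/leggedrobotics/darknet_ros.git
git clone https://github.com/IntelligentRoboticsLabs/gb_visual_detection_3d_msgs.git
cd ~/catkin_ws
rm -rf build devel
catkin_make -j1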

Thank you!

@fgonzalezr1998 hello, do you know where I can find the example code for adding a detected object as a tf and sending it to a robotic arm? https://github.com/introlab/find-object/blob/master/src/ros/tf_example_node.cpp this file is not available.