dusty-nv/ros_deep_learning

Some doubts about the efficiency when converting ros::

qtw1998 opened this issue · 0 comments

Firstly, thank you for your excellent contribution!!
node_detectnet.cpp (Ln87) uses input_cvt->Convert(input) to decode the sensor_msgs::ImageConstPtr message:

// input image subscriber callback
void img_callback( const sensor_msgs::ImageConstPtr input )
{
	// convert the image to reside on GPU
	if( !input_cvt || !input_cvt->Convert(input) )
	{
		ROS_INFO("failed to convert %ux%u %s image", input->width, input->height, input->encoding.c_str());
		return;	
	}

The converted result is then retrieved with input_cvt->ImageGPU() as the decoded image, which can be passed to Detect():

template<typename T> int Detect( T* image, uint32_t width, uint32_t height, Detection* detections, uint32_t overlay=OVERLAY_BOX )			{ return Detect((void*)image, width, height, imageFormatFromType<T>(), detections, overlay); }
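
For reference, the downstream call in node_detectnet.cpp looks roughly like the sketch below (assuming net is the detectNet instance and that imageConverter exposes ImageGPU(), GetWidth(), and GetHeight(); exact names and arguments may differ between versions):

// sketch: hand the GPU-resident image from the converter to detectNet
detectNet::Detection* detections = NULL;

const int numDetections = net->Detect(input_cvt->ImageGPU(),
                                      input_cvt->GetWidth(), input_cvt->GetHeight(),
                                      &detections);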

After reading through your code and the jetson-inference code, I arrived at the understanding above; if there are any errors, I hope you will correct me!
However, on more common or universal platforms such as embedded ARM kits, I usually write the Python version of ROS nodes like this:

#!/usr/bin/env python

import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
import cv2

# HSV bounds for a green color mask
greenLowerBound = np.array([50, 100, 100])
greenUpperBound = np.array([70, 255, 255])

# construct the bridge once rather than on every callback
bridge = CvBridge()

def image_callback(data):
    # CPU-side conversion of the ROS image message into a BGR OpenCV array
    try:
        src = bridge.imgmsg_to_cv2(data, "bgr8")
    except CvBridgeError as e:
        rospy.logerr(e)
        return
...

Now I am quite confused about whether bridge.imgmsg_to_cv2 is inefficient, i.e. slow and costly in terms of CPU usage.
Regarding efficiency, I want to know whether I need to write this in C++ instead, along the lines of #include "image_converter.h" & memcpy(mInputCPU, input->data.data() ..., to accelerate the decoding of images from ROS message types when the goal is to use OpenCV.
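
As far as I understand, image_converter.cpp follows roughly the pattern sketched below: one CPU memcpy of the raw message bytes into zero-copy mapped memory, then a CUDA kernel for the colorspace conversion (a minimal sketch assuming jetson-utils' cudaAllocMapped() and cudaConvertColor() and a "bgr8" input encoding; names like convertToGPU are illustrative, not the real implementation):

#include <jetson-utils/cudaMappedMemory.h>
#include <jetson-utils/cudaColorspace.h>
#include <jetson-utils/cudaUtility.h>

#include <sensor_msgs/Image.h>
#include <cstring>

static void* inputCPU  = NULL;   // CPU view of the zero-copy mapped input buffer
static void* inputGPU  = NULL;   // GPU view of the same physical memory
static void* outputCPU = NULL;   // CPU view of the converted output buffer
static void* outputGPU = NULL;   // GPU view of the converted output buffer

// illustrative function name -- the real code lives in imageConverter::Convert()
bool convertToGPU( const sensor_msgs::ImageConstPtr& input )
{
	const size_t inputSize  = input->data.size();
	const size_t outputSize = input->width * input->height * 4 * sizeof(float);

	// allocate the shared CPU/GPU buffers once, then reuse them per frame
	if( !inputCPU && !cudaAllocMapped(&inputCPU, &inputGPU, inputSize) )
		return false;

	if( !outputCPU && !cudaAllocMapped(&outputCPU, &outputGPU, outputSize) )
		return false;

	// CPU memcpy of the raw ROS message bytes into the mapped buffer
	memcpy(inputCPU, input->data.data(), inputSize);

	// CUDA kernel converts the interleaved bgr8 bytes to float4 RGBA on the GPU
	if( CUDA_FAILED(cudaConvertColor(inputGPU, IMAGE_BGR8, outputGPU, IMAGE_RGBA32F,
	                                 input->width, input->height)) )
		return false;

	return true;
}

If I understand correctly, the memcpy itself is still a CPU copy; the saving would come from doing the colorspace conversion on the GPU rather than on the CPU as cv_bridge does.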

Hoping for your help, thanks!