mikel-brostrom/boxmot

Question on Color Conversion in DeepOCSort During Tracking

Search before asking

  • I have searched the Yolo Tracking issues and found no similar bug report.

Question

Hello,

I'm encountering an issue while using track.py for inference, specifically when the run() function calls yolo.track(). If I understand correctly, during execution each frame is passed to the trackers through on_predict_start.

While porting the code to C++, I traced the call path for each frame's image (which should visually match the frames in the video) as follows:

run() > yolo.track(src=video) > on_predict_start > each tracker calls backend model update(img)

At this point, I save each frame image and the colors match those seen in the video.
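For reference, the check I do at that point is roughly the following (dump_frame is just my own debugging helper, not boxmot code). Since cv2.imwrite() expects BGR, which is what OpenCV delivers, the saved frames match the video colors:

```python
import os
import cv2

def dump_frame(img, frame_idx, out_dir="debug_frames"):
    """Save the BGR frame handed to the tracker, for visual inspection.

    cv2.imwrite() expects BGR (OpenCV's native order), so the saved image
    looks the same as the corresponding frame in the video.
    """
    os.makedirs(out_dir, exist_ok=True)
    cv2.imwrite(os.path.join(out_dir, f"frame_{frame_idx:06d}.jpg"), img)
```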

However, my confusion arises with the DeepOCSort backend model used for inference. It calls DeepOCSort.update(), which in turn calls self.model.get_features(dets[:, 0:4], img). This get_features() function calls the getCrops() method of the base class baseModelBackend, where individual YOLO detections are cropped.
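My simplified reading of that cropping step is roughly the following (an illustrative sketch only, not the actual boxmot implementation; the real backend also resizes and normalizes the crops before batching them for the ReID model):

```python
import numpy as np

def crop_detections(dets_xyxy, img):
    """Cut each detection box out of the full BGR frame.

    dets_xyxy: (N, 4) array of [x1, y1, x2, y2] boxes from the detector.
    img:       the BGR frame passed to the tracker's update().
    """
    crops = []
    h, w = img.shape[:2]
    for x1, y1, x2, y2 in dets_xyxy.astype(int):
        # Clamp coordinates to the image bounds before slicing.
        x1, y1 = max(0, x1), max(0, y1)
        x2, y2 = min(w, x2), min(h, y2)
        crops.append(img[y1:y2, x1:x2])
    return crops
```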

Here is where the issue occurs. After cropping, each cropped image undergoes a color conversion from BGR to RGB:

# (cv2) BGR 2 (PIL) RGB. The ReID models have been trained with this channel order
crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)

Using cv2.imshow() to display each converted detection, I notice that the images appear bluish. While this does not significantly affect the person tracking results, I still want to confirm if this behavior is expected.

I speculate that what primarily drives tracking is the variation in grayscale luminance, especially under extreme lighting conditions where the camera sensor's saturation limits come into play.
But since the relative motion of the pixels remains consistent, I think that is enough for tracking to work well.

Could you please clarify if the bluish appearance after the BGR to RGB conversion is intentional, and whether it might impact tracking performance?

Thank you!

All the ReID models have been trained on RGB images, hence the conversion. cv2.imshow() by default expects BGR, so displaying the RGB crop makes it appear with a bluish tint: the red channel is interpreted as blue and vice versa.
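If you want to inspect the crops visually, just convert them back to BGR before calling cv2.imshow(). Something along these lines (illustrative only, not part of the tracking pipeline):

```python
import cv2

def show_rgb_crop(crop_rgb, window="crop", delay=0):
    # crop_rgb: the crop after the BGR2RGB conversion shown above.
    # Convert back to BGR purely for display; cv2.imshow() assumes BGR,
    # so this makes the colors look correct on screen. The RGB crop fed
    # to the ReID model is untouched.
    cv2.imshow(window, cv2.cvtColor(crop_rgb, cv2.COLOR_RGB2BGR))
    cv2.waitKey(delay)
```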