Inquiry about Default Video Buffer Format (I420) in DSL and Its Impact on Memory Efficiency Compared to RGBA
YoungjaeDev opened this issue · 5 comments
Hello
I am currently working with the DeepStream Services Library (DSL) and have a few questions regarding the default video buffer format used in DSL.
-
Reason for Defaulting to I420: Could you provide insight into why I420 was chosen as the default video buffer format in DSL? Was it selected for specific advantages or compatibility reasons, or simply because DeepStream itself adopts I420?
-
Memory Efficiency of I420 vs. RGBA: I am particularly interested in understanding the memory efficiency of I420 compared to RGBA. Is there a significant difference in memory usage between these two formats in practical applications within DSL? My rough expectation from the raw per-pixel sizes is sketched after this list.
-
Format Conversion for Primary GIE: It is my understanding that the primary GIE in DeepStream supports formats such as RGB, BGR, and GRAY. Does this mean there is an automatic format conversion at the input and then a reversion back to the base buffer format at the output?
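For the memory question above, here is my rough back-of-the-envelope comparison of raw frame sizes. This is plain arithmetic only; actual NVMM allocations may add pitch/padding, so please correct me if these numbers don't hold in practice.

#include <cstdio>

int main()
{
    // Example 1080p frame; plain arithmetic, ignoring NVMM pitch/padding.
    const double pixels = 1920.0 * 1080.0;
    const double i420_bytes = pixels * 1.5;  // I420/NV12: 12 bits per pixel
    const double rgba_bytes = pixels * 4.0;  // RGBA: 32 bits per pixel

    std::printf("I420/NV12: %.2f MB per frame\n", i420_bytes / (1024 * 1024));
    std::printf("RGBA:      %.2f MB per frame\n", rgba_bytes / (1024 * 1024));
    std::printf("RGBA/I420 ratio: %.2fx\n", rgba_bytes / i420_bytes);
    return 0;
}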
Thank you for your time and assistance.
Best regards,
@youngjae-avikus the default is actually NV12
#define DSL_VIDEO_FORMAT_I420 L"I420"
#define DSL_VIDEO_FORMAT_NV12 L"NV12"
#define DSL_VIDEO_FORMAT_RGBA L"RGBA"
#define DSL_VIDEO_FORMAT_DEFAULT DSL_VIDEO_FORMAT_NV12
Keep in mind that the NVIDIA Streammux plugin only accepts NV12 and RGBA buffers.
Also, the nvv4l2decoder and other Source plugins default to NV12. Selecting NV12 as the DSL default avoids a conversion on input in most cases.
Conversion on input to the infer-plugin can be done if it is something you need; however, color conversion adds GPU overhead.
https://github.com/prominenceai/deepstream-services-library/blob/master/docs/api-source.md#dsl-video-format-types
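If you do end up needing RGBA going into the infer-plugin, the conversion amounts to an nvvideoconvert stage ahead of it. Here is a minimal stand-alone sketch of just that conversion, outside of DSL; the test source, resolution, and frame count are placeholders, and the element names are the standard DeepStream GStreamer plugins.

#include <gst/gst.h>

int main(int argc, char* argv[])
{
    gst_init(&argc, &argv);

    // NV12 system-memory frames -> nvvideoconvert -> RGBA NVMM buffers.
    // The same conversion inside a real pipeline costs GPU time per frame.
    GError* error = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "videotestsrc num-buffers=300 ! video/x-raw,format=NV12,width=1920,height=1080 ! "
        "nvvideoconvert ! capsfilter caps=\"video/x-raw(memory:NVMM),format=RGBA\" ! "
        "fakesink sync=false",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error ? error->message : "unknown");
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until EOS or error.
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}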
Ah, I checked the API document, but it doesn't seem to have been updated...
It seems I was mistaken. I had thought the input to the Primary-GIE must always be converted to RGBA because the model expects a color format, but are you suggesting that it can also handle NV12 input and perform the color-format conversion internally?
@youngjae-avikus thanks for pointing out the incorrect docs... I will update this.
This statement from the NVIDIA docs seems a little contradictory to me...
The plugin accepts batched NV12/RGBA buffers from upstream. The NvDsBatchMeta structure must already be attached to the Gst Buffers. The low-level library (libnvds_infer) operates on any of INT8 RGB, BGR, or GRAY data
I'm wondering if you might have the opportunity to reach out to NVIDIA for clarification on this matter.
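In the meantime, one way to see what the Gst plugin itself advertises, as opposed to what the low-level library operates on, is to query the nvinfer sink-pad template caps. A quick sketch, assuming the DeepStream GStreamer plugins are installed; nothing here is DSL-specific.

#include <gst/gst.h>

int main(int argc, char* argv[])
{
    gst_init(&argc, &argv);

    // Create the DeepStream inference element and inspect what its sink pad accepts.
    GstElement* pgie = gst_element_factory_make("nvinfer", "primary-gie");
    if (!pgie) {
        g_printerr("nvinfer plugin not found - is DeepStream installed?\n");
        return 1;
    }

    GstPad* sink = gst_element_get_static_pad(pgie, "sink");
    GstCaps* caps = gst_pad_get_pad_template_caps(sink);
    gchar* str = gst_caps_to_string(caps);

    // Expect NV12/RGBA (memory:NVMM) here; the RGB/BGR/GRAY in the quoted docs
    // appears to describe what the low-level library converts to internally.
    g_print("nvinfer sink template caps:\n%s\n", str);

    g_free(str);
    gst_caps_unref(caps);
    gst_object_unref(sink);
    gst_object_unref(pgie);
    return 0;
}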