How to add multiple sources to deepstream.py?
EscaticZheng opened this issue · 7 comments
In DeepStream-Yolo, I can edit deepstream_app_config.txt to add more sources, but I don't know how to do the same in the Python script.
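For reference, in deepstream_app_config.txt each extra input is just another [sourceN] group, with the [streammux] batch-size raised to match (URIs below are placeholders; keep the other [streammux] keys you already have):

```
[source0]
enable=1
# type 3 = URI source (file:// or rtsp://)
type=3
uri=file:///path/to/video0.mp4
num-sources=1

[source1]
enable=1
type=3
uri=file:///path/to/video1.mp4
num-sources=1

[streammux]
batch-size=2
```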
You will need to add the code for it. Take a look at https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-test3
I successfully modified it. However, when it runs inference on two streams, the FPS drops from 28 to 15. Is there a method that can improve multi-stream inference performance?
That's the expected result. If the board can process only 28 fps with one source, it will be around 15 fps for 2 sources. You can try FP16 to get more performance (1.5x+).
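For reference, in DeepStream-Yolo the precision is selected in the nvinfer config file (config_infer_primary_*.txt); a minimal sketch of the relevant key, assuming the usual layout:

```
[property]
# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=2
```

After changing it, delete (or rename) the previously built .engine file, or clear model-engine-file, so TensorRT rebuilds the engine in FP16.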
Could you please share the modified code?
Well,

1. Change SOURCE = '' to SOURCE = []
2. Change source_bin = create_uridecode_bin(0, SOURCE, streammux) to:
       for i in range(len(SOURCE)):
           print(SOURCE[i])
           source_bin = create_uridecode_bin(i, SOURCE[i], streammux)
3. Change if 'file://' in SOURCE: to if 'file://' in SOURCE[0]:
4. Change parser.add_argument('-s', '--source', required=True, help='Source stream/file') to:
       parser.add_argument('-s', '--source', nargs='+', required=True, help='Source stream/file')
5. Then add a tiler display to it (a consolidated sketch of all of these changes follows below):
       import math
       tiler = Gst.ElementFactory.make('nvmultistreamtiler', 'nvtiler')
       if not tiler:
           sys.stderr.write('Unable to create tiler\n')
           sys.exit(1)
       tiler_rows = int(math.sqrt(len(SOURCE)))
       tiler_columns = int(math.ceil(1.0 * len(SOURCE) / tiler_rows))
       tiler.set_property('rows', tiler_rows)
       tiler.set_property('columns', tiler_columns)
       tiler.set_property('width', 1280)
       tiler.set_property('height', 720)
       tiler.set_property('show-source', 1)
       pipeline.add(tiler)
       converter.link(tiler)
       tiler.link(osd)
       osd.link(sink)
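Putting the pieces together, here is a minimal self-contained sketch of what the modified script ends up doing. The element names, the config file name config_infer_primary.txt, and the nveglglessink sink are assumptions; marcoslucianops' deepstream.py uses its own helpers such as create_uridecode_bin, so this is not the actual file.

```python
#!/usr/bin/env python3
# Minimal multi-source sketch (not the repo's deepstream.py). Assumes the
# NVIDIA HW decoder is picked by uridecodebin so buffers are in NVMM memory.
import sys
import math
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def on_pad_added(decodebin, pad, sinkpad):
    # Link only the decoded video branch of uridecodebin to the muxer pad.
    caps = pad.get_current_caps() or pad.query_caps(None)
    if caps.get_structure(0).get_name().startswith('video/'):
        pad.link(sinkpad)

def main(sources):
    if not sources:
        sys.exit('usage: multi_source.py URI [URI ...]')
    Gst.init(None)
    pipeline = Gst.Pipeline.new('multi-source-pipeline')

    streammux = Gst.ElementFactory.make('nvstreammux', 'streammux')
    streammux.set_property('batch-size', len(sources))  # one slot per source
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batched-push-timeout', 40000)
    pipeline.add(streammux)

    # One uridecodebin per input, each linked to its own streammux sink pad.
    for i, uri in enumerate(sources):
        source_bin = Gst.ElementFactory.make('uridecodebin', f'source-{i}')
        source_bin.set_property('uri', uri)
        sinkpad = streammux.get_request_pad(f'sink_{i}')
        source_bin.connect('pad-added', on_pad_added, sinkpad)
        pipeline.add(source_bin)

    pgie = Gst.ElementFactory.make('nvinfer', 'pgie')
    pgie.set_property('config-file-path', 'config_infer_primary.txt')  # placeholder
    pgie.set_property('batch-size', len(sources))

    converter = Gst.ElementFactory.make('nvvideoconvert', 'converter')

    tiler = Gst.ElementFactory.make('nvmultistreamtiler', 'tiler')
    rows = int(math.sqrt(len(sources)))
    tiler.set_property('rows', rows)
    tiler.set_property('columns', int(math.ceil(len(sources) / rows)))
    tiler.set_property('width', 1280)
    tiler.set_property('height', 720)

    osd = Gst.ElementFactory.make('nvdsosd', 'osd')
    sink = Gst.ElementFactory.make('nveglglessink', 'sink')  # nv3dsink on recent Jetson
    sink.set_property('sync', 0)  # do not throttle to the file/stream frame rate

    for element in (pgie, converter, tiler, osd, sink):
        pipeline.add(element)
    streammux.link(pgie)
    pgie.link(converter)
    converter.link(tiler)
    tiler.link(osd)
    osd.link(sink)

    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect('message::eos', lambda *_: loop.quit())
    bus.connect('message::error', lambda *_: loop.quit())

    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    main(sys.argv[1:])  # e.g. python3 multi_source.py file:///a.mp4 file:///b.mp4
```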
After following your instructions, I successfully modified the code to handle multiple sources with DeepStream. However, I ran into an issue where the application processes multiple streams but fails to display multiple video feeds simultaneously. Could you please share your deepstream.py file or provide guidance on how to resolve this? Here is my deepstream.py.
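One detail worth checking, based on the snippet above, is the tiler's show-source property: nvmultistreamtiler composites all sources when show-source is -1 (the default), while a non-negative index displays only that single source. A minimal sketch of the relevant settings, using the variable names from the snippet:

```python
# -1 (default) tiles every stream; an index >= 0 shows only that stream.
tiler.set_property('show-source', -1)
# The muxer batch size should also match the number of inputs,
# otherwise only part of each batch is filled.
streammux.set_property('batch-size', len(SOURCE))
```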
@marcoslucianops, when I try batch size = 4 in the tiler, the FPS is 30/4 = ~7.x. Does the slowdown come from the computation, or from the synchronisation in the sink?
@EscaticZheng in my case, setting sink.set_property('sync', 0) raised the FPS to slightly above the original FPS of that video.
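For reference, a minimal sketch of where these settings go, assuming the sink, pgie, and streammux variables from the snippets above; the explanation of sync is standard GStreamer behaviour, and the batch-size note is an assumption about the usual nvinfer/nvstreammux pairing:

```python
# With sync=1 (default) the sink throttles playback to the buffer timestamps,
# so the measured FPS is capped near the video's native frame rate. sync=0
# renders buffers as fast as the pipeline produces them, which is what you
# want when benchmarking inference throughput.
sink.set_property('sync', 0)
# Optional: stop the sink from sending QoS events upstream, so elements do
# not drop or degrade frames when the pipeline falls behind the clock.
sink.set_property('qos', 0)

# Also make sure the engine is built for the batch you feed it; if nvinfer's
# batch-size is smaller than the muxer's, each muxed batch is split into
# several inference calls.
streammux.set_property('batch-size', 4)
pgie.set_property('batch-size', 4)
```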