A streamlined collection of tools on top of NVIDIA DeepStream 6.1, packaged as an Ubuntu Docker image.
These requirements are imposed by the base image in the Dockerfile. See the DeepStream container catalog for more info.

- Docker: see NVIDIA's instructions
- NVIDIA Container Toolkit: see NVIDIA's instructions
- NVIDIA display driver version 515.65+ **

** Not enforced: NVIDIA driver 510.85 on Ubuntu 18.04 & 20.04 LTS has been seen to work properly.
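A quick way to check the driver floor before building is to compare the version reported by `nvidia-smi` (the `--query-gpu=driver_version` query is standard). The `version_ge` helper below is not part of this repo — just a sketch using `sort -V`:

```shell
# version_ge A B -> success if dotted version A >= B
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Only probe the driver if nvidia-smi is actually present on this host
if command -v nvidia-smi >/dev/null 2>&1; then
    DRV=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)
    version_ge "$DRV" 515.65 && echo "driver $DRV OK" || echo "driver $DRV below 515.65"
fi
```
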
Configure which features to install by manually editing which `setup/*.sh` scripts run in the Dockerfile:

- DeepStream-Yolo: Inference engine for YOLO networks. See its documentation for preparing models. This repo automates building `yolov5n`.
- Gst-Daemon: Daemon for GStreamer.
- Gst-Shark: Performance monitor for GStreamer.
- Mqtt-toolkit: Custom library to get NVDS detections through MQTT.
- Pylonsrc: Installs gst-plugins-vision, supporting Basler cameras.
- Yolov5n: Downloads and prepares a YOLOv5n model (20 MB) to quickly test stuff; requires DeepStream-Yolo to be set up.
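As an illustration, toggling features could look like the hypothetical Dockerfile fragment below — the script names match the ones shipped in `/nvds/setup`, but the actual `RUN` layout in this repo's Dockerfile may differ:

```dockerfile
# Hypothetical sketch; comment out the features you don't need before building.
RUN bash setup/deepstream-yolo.sh
RUN bash setup/gst-daemon.sh
RUN bash setup/gst-shark.sh
# RUN bash setup/pylonsrc.sh        # skip Basler camera support
RUN bash setup/prepare_yolov5n.bash
```
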
Build the docker image:

```
docker build -t nvds-lite .
```

The resulting image takes a bit under 20 GB of disk space.
Optional: embed an `.engine` file in the docker image to avoid rebuilding it at login:

- Log into a container: `nvds-lite`
  - Place yourself at the dir where the `.engine` will spawn if not found: `cd /nvds/assets`
  - Run DeepStream as usual; the engine build will start at the cwd: `bash /nvds/samples/v4l2-yolov5-display.bash`
  - Wait for the engine creation and then interrupt with `^C`
  - Make sure the engine lies at `/nvds/assets`
- Leave the container running and open a terminal at the host computer:
  - Find your container id from `docker container ls`
  - Commit the docker image from the container's id: `docker commit CONTAINER nvds-lite`
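The host-side half of the steps above can be scripted. This is a sketch, not part of the repo: `newest_cid` is a helper defined here, built on docker's real `--filter ancestor=` and `--format '{{.ID}}'` flags, to find the running container spawned from the `nvds-lite` image and commit it:

```shell
# Return the id of the newest running container spawned from image $1
newest_cid() {
    docker container ls --filter "ancestor=$1" --format '{{.ID}}' | head -n1
}

# Commit it back under the same tag, baking the freshly built .engine in
if command -v docker >/dev/null 2>&1; then
    CID=$(newest_cid nvds-lite)
    if [ -n "$CID" ]; then docker commit "$CID" nvds-lite; fi
fi
```
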
Use the executable script `nvds-lite` to use the app. There are two ways of invoking it:

- Call it without arguments: `nvds-lite`
  - This will invoke the docker image and leave you in a terminal inside a container.
  - The directory from which you called the command is available at `/host`.
  - This is the default mode, meant for testing applications.
- Point it to a bash script: `nvds-lite run < host_script`
  - This will run the script you provide (which is hosted on your computer).
  - Even in this mode, `/host` still points to the host computer.
  - You should not use this in the general case:
    - The running container does not receive signals (`^C`, `^D`)
    - You might need to uncleanly kill it with `^Z + kill %1` or similar
    - Meant for quick runs of tested scripts only.
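For intuition, here is a hypothetical sketch of what such a launcher boils down to — the real `nvds-lite` script in this repo may differ. It also shows why the `run` mode misses signals: docker gets `-i` but no `-t`, so there is no TTY to forward `^C`/`^D` through:

```shell
# Hypothetical launcher sketch (NOT the repo's actual script)
nvds_lite() {
    if [ "${1:-}" = run ]; then
        # stdin carries the host script; no -t, so no TTY and no signal forwarding
        docker run --rm -i --gpus all -v "$PWD:/host" nvds-lite bash
    else
        # interactive default mode: full TTY inside the container
        docker run --rm -it --gpus all -v "$PWD:/host" nvds-lite bash
    fi
}
```
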
You need the DeepStream-Yolo and Yolov5n features installed to try these.

- From within a container:

  ```
  (host)$ nvds-lite
  (nvds)$ bash /nvds/samples/v4l2-yolov5-display.bash
  ```

- From the host:

  ```
  (host)$ nvds-lite run < samples/v4l2-yolov5-display.bash
  ```

- From the host, setting up a v4l2 loopback first:
  - Create a dummy virtual device: `sudo modprobe v4l2loopback devices=1`
  - Check what you just created: `v4l2-ctl --list-devices`
  - Keep this in mind to remove it when done: `sudo modprobe -r v4l2loopback`
  - Now run the provided sample, changing `/dev/videoXX` for your loopback device:

    ```
    (host)$ nvds-lite
    (nvds)$ bash /nvds/samples/v4l2-yolov5-v4l2.bash /dev/videoXX
    ```

  - Done! Now select your new virtual device on Teams or whatever and enjoy ^^
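A tip for the loopback step above: v4l2loopback's real module parameters `video_nr`, `card_label`, and `exclusive_caps` let you pin the device number and give it a recognizable name (`exclusive_caps=1` also helps browsers and Teams enumerate it). The sketch below only runs `modprobe` when it can actually succeed:

```shell
# Assumed values for illustration; pick your own device number and label
MODPROBE_ARGS='v4l2loopback devices=1 video_nr=42 card_label=nvds-out exclusive_caps=1'

# Only attempt the load as root and when the module is installed
if [ "$(id -u)" -eq 0 ] && modinfo v4l2loopback >/dev/null 2>&1; then
    modprobe $MODPROBE_ARGS    # unquoted on purpose: args must word-split
fi
echo "$MODPROBE_ARGS"
```

With `video_nr=42` the device shows up deterministically as `/dev/video42`, so the sample can be pointed at it without checking `v4l2-ctl --list-devices` first.
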
Besides the supporting Linux distro, here's the private subsystem holding these tools together:

- `/host`: Links the container to the host PC at the directory from which it is run. Avoid this behavior by editing the `nvds-lite` launcher.
- `/nvds`:
  - `/nvds/assets`: Holds supporting files and tools: config files, weights, etc.
  - `/nvds/lib`: Holds libraries that NVDS needs to be told explicitly about, be it in cfg files or within the pipeline.
  - `/nvds/opt`: Holds the installation directories of installed tools.
  - `/nvds/setup`: Carries scripts to install the tools that live in `/nvds/opt`.
  - `/nvds/samples`: A few samples to quickly test DeepStream. These are GStreamer pipelines defined in bash scripts.
  - `/nvds/utils`: Related, yet not-so-well maintained or documented, scripts for various random tasks.
Here's an overview of the resulting layout:
```
/nvds
|-- assets
|   |-- coco_config_infer_primary.txt
|   |-- coco_labels.txt
|   |-- msgconv_config.txt
|   |-- rtsp-server/
|   |-- yolov5n.cfg
|   `-- yolov5n.wts
|-- lib
|   |-- libnvds_mqtt_proto.so -> /nvds/opt/libnvds_mqtt_proto/libnvds_mqtt_proto.so
|   |-- libnvdsinfer_custom_impl_Yolo.so -> /nvds/opt/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
|   `-- nvds_mqtt_proto.so -> /nvds/opt/libnvds_mqtt_proto/libnvds_mqtt_proto.so
|-- opt
|   |-- DeepStream-Yolo/
|   |-- gst-daemon/
|   |-- gst-perf/
|   |-- gst-shark/
|   |-- libnvds_mqtt_proto/
|   `-- yolov5/
|-- samples
|   |-- v4l2-yolov5-display.bash
|   `-- v4l2-yolov5-v4l2.bash
|-- setup
|   |-- deepstream-yolo.sh
|   |-- gst-daemon.sh
|   |-- gst-perf.sh
|   |-- gst-shark.sh
|   |-- mqtt-toolkit.sh
|   |-- prepare_yolov5n.bash
|   `-- pylonsrc.sh
`-- utils
    |-- array-test.sh
    |-- avg_fps.bash
    |-- avg_latency.bash
    |-- git-forget-blob.sh
    |-- gstd-gui.bash
    `-- trace-reader.bash
```