NVIDIA Orin ultralytics

Sample workspace to quickly deploy YOLO models on the NVIDIA Orin.

Dependencies

  • Docker
  • JetPack 5.1.1
  • Docker Compose: apt install docker-compose
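Before following the Usage steps, it can help to confirm the required CLI tools are on PATH. A minimal sketch (the tool names mirror the list above; the JetPack version itself still has to be checked by hand, e.g. via /etc/nv_tegra_release):

```python
# Sketch: check that the CLI tools this workspace relies on are installed.
# JetPack 5.1.1 itself must be verified manually; this only checks PATH.
import shutil

def missing_tools(tools=("docker", "docker-compose")):
    """Return the required tools that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]
```

An empty return list means both Docker and docker-compose are available.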

Usage

  • Clone this repo: git clone --recursive https://github.com/pabsan-0/yolov8-orin
  • Convert the model from YOLOv8 (.pt) to Darknet format:
$ docker-compose run --rm ultralytics
# bash dl_weights.sh   # wget weights or get your own
# python3 gen_wts_yolov8.py --size 640 -w yolov8n.pt -o /weights
# rm labels.txt
  • Run DeepStream on the converted model (the Darknet -> TensorRT engine build is automatic):
$ cd DeepStream-Yolo
$ ls -l /usr/local/ | grep cuda  ## check CUDA version
$ CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo  # compilation must be done on the host
$ docker-compose run deepstream
# bash pipe_sh/v4l2-docker.py

The first time an .engine file is built, copy it into /weights so later runs reuse it instead of rebuilding.
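That caching step can be automated with a small check before launching the pipeline. A sketch, where the /weights layout and the <model>.engine naming are assumptions for illustration, not taken from the repo:

```python
# Sketch: look for a previously built TensorRT engine before triggering a rebuild.
# The /weights directory and "<model>.engine" naming are illustrative assumptions.
from pathlib import Path

def cached_engine(weights_dir, model):
    """Return the path to a cached .engine for `model`, or None if absent."""
    engine = Path(weights_dir) / f"{model}.engine"
    return engine if engine.is_file() else None
```

If this returns a path, the slow Darknet -> TensorRT conversion can be skipped.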

Snippets & links

  • Check CUDA version: ls -l /usr/local | grep cuda
  • Check DeepStream version & health: deepstream-app --version
  • JetPack containers for Jetson:
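The CUDA_VER value needed for the make step in Usage can be derived from that same /usr/local listing. A sketch, assuming toolkit installs follow the usual cuda-<major>.<minor> directory convention:

```python
# Sketch: pick CUDA_VER for `make -C nvdsinfer_custom_impl_Yolo` from /usr/local.
# Assumes toolkit installs follow the /usr/local/cuda-<major>.<minor> layout.
import re
from pathlib import Path

def detect_cuda_ver(prefix="/usr/local"):
    """Return the highest 'major.minor' found among cuda-* entries, or None."""
    versions = []
    for p in Path(prefix).glob("cuda-*"):
        m = re.fullmatch(r"cuda-(\d+)\.(\d+)", p.name)
        if m:
            versions.append((int(m.group(1)), int(m.group(2))))
    if not versions:
        return None
    major, minor = max(versions)
    return f"{major}.{minor}"
```

For example, a host with /usr/local/cuda-11.4 yields "11.4", ready to pass as CUDA_VER.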

Bugs and limitations

  • YOLOv8 may need a few extra dependencies for its built-in engine export
  • GStreamer pipelines written in C must not include the ' characters used for shell quoting
  • DeepStream 6.2 sometimes requires specifying the video format in the caps before nvstreammux:
nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12'
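The quoting caveat above is worth spelling out: the single quotes only protect the (memory:NVMM) parentheses from the shell, and they are not part of GStreamer pipeline syntax, so a pipeline string assembled in code must omit them or parsing fails. A sketch of the distinction (pure string handling, no GStreamer required):

```python
# The quotes in the shell form are shell escaping for the (memory:NVMM) parens;
# a pipeline string built in C or Python must omit them.
SHELL_FORM = "nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12'"
CODE_FORM = "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12"

def to_code_form(pipeline):
    """Strip the shell-only single quotes from a pipeline description."""
    return pipeline.replace("'", "")
```

CODE_FORM is the string you would hand to a pipeline parser in application code.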