
ONNX-Jetson

Description

Examples of deploying inference models via ONNX Runtime on a Jetson device.

Usage

Building an Image

Build a new image from the Dockerfile:

./BUILD-DOCKER-IMAGE.sh # (recommended)

or

docker build -t jetson-onnxruntime-yolov4 .

Running Scripts

Two options:

  1. Run the scripts in standalone mode
  2. Execute ./RUN-DOCKER.sh to start a container, then run the scripts inside it

Example 1 - Object Detection

  1. Download the YOLOv4 model here and save it to ./onnx/

  2. Run the application

  • Standalone:
cd yolov4/
nvidia-docker run -it --rm -v "$PWD":/workspace/ --workdir=/workspace/ jetson-onnxruntime-yolov4 python3 yolov4.py
  • In the container:
cd ~/ros_ws/src/glozzom/yolov4/
python3 yolov4.py
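The contents of yolov4.py are not reproduced here. As an illustration, the sketch below shows the kind of preprocessing and ONNX Runtime call such a script typically performs; the 416×416 input size, NHWC layout, gray padding, and function names are assumptions about the exported model, not the actual script:

```python
import numpy as np

def preprocess(image, input_size=416):
    """Letterbox-resize an HWC uint8 image to a square model input,
    pad with gray, and scale pixel values to [0, 1]."""
    h, w = image.shape[:2]
    scale = input_size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize in pure NumPy (avoids an OpenCV dependency).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = image[rows][:, cols]
    canvas = np.full((input_size, input_size, 3), 128, dtype=np.uint8)
    top, left = (input_size - nh) // 2, (input_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    # Batch of one, NHWC float32; some YOLOv4 exports expect NCHW instead.
    return canvas[np.newaxis].astype(np.float32) / 255.0

def detect(model_path, image):
    # Lazy import: requires onnxruntime-gpu inside the Jetson container.
    import onnxruntime as ort
    sess = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    # Raw network outputs; YOLOv4 still needs anchor decoding and NMS.
    return sess.run(None, {input_name: preprocess(image)})
```

On a Jetson, CUDAExecutionProvider requires the GPU-enabled onnxruntime wheel baked into the Docker image; the CPU provider is listed as a fallback.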

Example 2 - Classifier

  1. Download the fruits dataset here and save it to ./data/

  2. Extract the dataset into ./data/fruits

  3. Download the ViT model here and save it to ./onnx/

  4. Run the application

  • Standalone:
cd vit/
nvidia-docker run -it --rm -v "$PWD":/workspace/ --workdir=/workspace/ jetson-onnxruntime-yolov4 python3 vit_fruits_man_onnx.py
  • In the container:
cd ~/ros_ws/src/glozzom/vit/
python3 vit_fruits_man_onnx.py
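The contents of vit_fruits_man_onnx.py are likewise not shown. A hedged sketch of typical ViT classifier inference follows; the 224×224 input size, ImageNet normalization constants, NCHW layout, and function names are all assumptions about the exported model:

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image, input_size=224):
    """Resize an HWC uint8 image, normalize with ImageNet statistics,
    and return an NCHW float32 batch of one."""
    h, w = image.shape[:2]
    # Nearest-neighbour resize in pure NumPy (avoids an OpenCV dependency).
    rows = (np.arange(input_size) * h // input_size).clip(0, h - 1)
    cols = (np.arange(input_size) * w // input_size).clip(0, w - 1)
    resized = image[rows][:, cols].astype(np.float32) / 255.0
    normalized = (resized - IMAGENET_MEAN) / IMAGENET_STD
    return normalized.transpose(2, 0, 1)[np.newaxis]  # (1, 3, 224, 224)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def classify(model_path, image, labels):
    # Lazy import: requires onnxruntime-gpu inside the Jetson container.
    import onnxruntime as ort
    sess = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
    logits = sess.run(None, {sess.get_inputs()[0].name: preprocess(image)})[0][0]
    probs = softmax(logits)
    return labels[int(probs.argmax())], float(probs.max())
```

The class labels would come from the extracted ./data/fruits directory structure, e.g. one label per subdirectory.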