Examples of deploying inference models with ONNX on a Jetson device.
Build a new image from the Dockerfile:
./BUILD-DOCKER-IMAGE.sh # (recommended)
or
docker build -t jetson-onnxruntime-yolov4 .
Two options:
- Run the scripts in standalone mode
- Execute ./RUN-DOCKER.sh to start a container, then run the scripts inside it
- Download the YOLOv4 model here and save it to ./onnx/
- Run the application (a sketch of the inference flow follows these steps):
- Standalone:
cd yolov4/
nvidia-docker run -it --rm -v $PWD:/workspace/ --workdir=/workspace/ jetson-onnxruntime-yolov4 python3 yolov4.py
- In the container:
cd ~/ros_ws/src/glozzom/yolov4/
python3 yolov4.py
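
For orientation, the core of the script boils down to an ONNX Runtime session plus pre- and post-processing. The sketch below is a minimal outline, not the actual yolov4.py: the 416x416 NHWC input layout, the model path, the image path, and the provider list are assumptions (based on the ONNX Model Zoo YOLOv4 model) and may differ from this repo.

# Minimal sketch of YOLOv4 inference with ONNX Runtime.
# Assumptions: 416x416 NHWC float input (ONNX Model Zoo layout);
# adjust the model path, input size, and layout to the model in ./onnx/.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "onnx/yolov4.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Preprocess: BGR -> RGB, resize to the network input, scale to [0, 1].
img = cv2.imread("input.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (416, 416)).astype(np.float32) / 255.0
batch = np.expand_dims(img, axis=0)  # (1, 416, 416, 3)

input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: batch})

# The raw outputs are feature maps; a full script decodes boxes,
# thresholds confidences, and applies non-max suppression here.
for out in outputs:
    print(out.shape)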
- Download the fruits dataset here and save it to ./data/
- Extract the dataset into ./data/fruits
- Download the ViT model here and save it to ./onnx/
- Run the application (see the sketch at the end):
- Standalone:
cd vit/
nvidia-docker run -it --rm -v $PWD:/workspace/ --workdir=/workspace/ jetson-onnxruntime-yolov4 python3 vit_fruits_man_onnx.py
- In the container:
cd ~/ros_ws/src/glozzom/vit/
python3 vit_fruits_man_onnx.py
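
The ViT script follows the same session-run pattern. Below is a minimal sketch assuming a standard 224x224 NCHW ViT with ImageNet mean/std normalization and a single logits output; the model filename, image path, and preprocessing details are assumptions and may not match vit_fruits_man_onnx.py exactly.

# Minimal sketch of ViT image classification with ONNX Runtime.
# Assumptions: 224x224 NCHW input, ImageNet normalization, single
# logits output; the model and image paths are placeholders.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "onnx/vit.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Preprocess: BGR -> RGB, resize, scale, normalize, channels-first.
img = cv2.imread("data/fruits/example.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
img = (img - mean) / std
batch = np.transpose(img, (2, 0, 1))[np.newaxis, ...]  # (1, 3, 224, 224)

input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: batch})[0]

# Softmax over classes, then report the top prediction.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
print("predicted class index:", int(probs.argmax(axis=-1)[0]))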