NVIDIA_Jetson_Inference_be

This repo contains model compression (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Jetson Nano (aarch64 architecture).
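As a minimal sketch of the kind of TensorRT compression step this repo covers, the `trtexec` tool that ships with TensorRT on Jetson can build an optimized engine from an ONNX model. The file names `model.onnx` and `model_fp16.engine` below are placeholders, not paths from this repo:

```shell
# Build a TensorRT engine from an ONNX model with FP16 precision.
# trtexec is installed with TensorRT (typically under /usr/src/tensorrt/bin on Jetson).
trtexec --onnx=model.onnx \
        --saveEngine=model_fp16.engine \
        --fp16

# Benchmark inference latency/throughput of the built engine on-device.
trtexec --loadEngine=model_fp16.engine
```

FP16 roughly halves model size and memory bandwidth on Jetson GPUs; INT8 (`--int8`, which additionally needs a calibration step for accuracy) compresses further at the cost of some precision.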

Primary language: Makefile
