NVIDIA_Jetson_Inference

This repo contains model-compression workflows (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Jetson Nano devices (aarch64 architecture).

Primary language: Makefile