Deploying Deep Learning
Welcome to our training guide for inference and the realtime DNN vision library for NVIDIA Jetson Nano/TX1/TX2/Xavier.
This repo uses NVIDIA TensorRT for efficiently deploying neural networks onto the embedded Jetson platform, improving performance and power efficiency using graph optimizations, kernel fusion, and FP16/INT8 precision.
Vision primitives, such as `imageNet` for image recognition, `detectNet` for object localization, and `segNet` for semantic segmentation, inherit from the shared `tensorNet` object. Examples are provided for streaming from a live camera feed and processing images. See the API Reference section for detailed reference documentation of the C++ and Python libraries.
There are multiple tracks of the tutorial that you can choose to follow, including Hello AI World for running inference and transfer learning onboard your Jetson, or the full Two Days to a Demo tutorial for training on a PC or server with DIGITS.
It's recommended to walk through the Hello AI World module first to familiarize yourself with machine learning and inference with TensorRT, before proceeding to training in the cloud with DIGITS.
Table of Contents
- Hello AI World
- Two Days to a Demo
- API Reference
- Code Examples
- Pre-trained Models
- System Requirements
- Extra Resources
> Jetson Nano Developer Kit and JetPack 4.2.1 are now supported in the repo.
> See our latest technical blog, including benchmarks: Jetson Nano Brings AI Computing to Everyone.
> Hello AI World now supports Python and onboard training with PyTorch!
Hello AI World
Hello AI World can be run completely onboard your Jetson, including inferencing with TensorRT and transfer learning with PyTorch. The inference portion of Hello AI World - which includes coding your own image classification application for C++ or Python, object detection, and live camera demos - can be run on your Jetson in roughly two hours or less, while transfer learning is best left running overnight. A minimal Python classification sketch follows the list of steps below.
- Setting up Jetson with JetPack
- Building the Project from Source
- Classifying Images with ImageNet
- Locating Objects with DetectNet
- Transfer Learning with PyTorch
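For a sense of what the coding step looks like, here is a minimal Python sketch of image classification with the library (the filename `my_image.jpg` is a placeholder; the tutorial walks through building the full program):

```python
import jetson.inference
import jetson.utils

# load the image recognition network with TensorRT
net = jetson.inference.imageNet("googlenet")

# load an image into shared CPU/GPU memory ("my_image.jpg" is a placeholder)
img, width, height = jetson.utils.loadImageRGBA("my_image.jpg")

# classify the image and look up the class description
class_idx, confidence = net.Classify(img, width, height)
class_desc = net.GetClassDesc(class_idx)

print("image is recognized as '{:s}' (class #{:d}) with {:f}% confidence".format(
    class_desc, class_idx, confidence * 100))
```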
Two Days to a Demo (DIGITS)
The full tutorial includes training in the cloud or on a PC, and inference on the Jetson with TensorRT; it can take roughly two days or more depending on system setup, dataset downloads, and the training speed of your GPU.
- DIGITS Workflow
- DIGITS System Setup
- Setting up Jetson with JetPack
- Building the Project from Source
- Classifying Images with ImageNet
  - Using the Console Program on Jetson
  - Coding Your Own Image Recognition Program
  - Running the Live Camera Recognition Demo
- Re-Training the Network with DIGITS
  - Downloading Image Recognition Dataset
  - Customizing the Object Classes
  - Importing Classification Dataset into DIGITS
  - Creating Image Classification Model with DIGITS
  - Testing Classification Model in DIGITS
  - Downloading Model Snapshot to Jetson
- Loading Custom Models on Jetson
- Locating Objects with DetectNet
  - Detection Data Formatting in DIGITS
  - Downloading the Detection Dataset
  - Importing the Detection Dataset into DIGITS
  - Creating DetectNet Model with DIGITS
  - Testing DetectNet Model Inference in DIGITS
  - Downloading the Detection Model to Jetson
  - DetectNet Patches for TensorRT
  - Detecting Objects from the Command Line
  - Multi-class Object Detection Models
- Running the Live Camera Detection Demo on Jetson
- Semantic Segmentation with SegNet
API Reference
Below are links to reference documentation for the C++ and Python libraries from the repo:
jetson-inference
|                   | C++         | Python        |
|-------------------|-------------|---------------|
| Image Recognition | `imageNet`  | `imageNet`    |
| Object Detection  | `detectNet` | `detectNet`   |
| Segmentation      | `segNet`    | (coming soon) |
jetson-utils
These libraries can be used in external projects by linking to `libjetson-inference` and `libjetson-utils`.
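As a quick taste of the API above, here is a minimal Python detection sketch (the filename `street.jpg` is a placeholder; `ssd-mobilenet-v2` is one of the pre-trained models listed below):

```python
import jetson.inference
import jetson.utils

# load the detection network (model name from the pre-trained list below)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# load an image ("street.jpg" is a placeholder filename)
img, width, height = jetson.utils.loadImageRGBA("street.jpg")

# detect objects; the bounding-box overlay is drawn into the image by default
detections = net.Detect(img, width, height)

for d in detections:
    print("{:s} ({:.1f}%) at ({:.0f}, {:.0f})".format(
        net.GetClassDesc(d.ClassID), d.Confidence * 100, d.Left, d.Top))
```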
Code Examples
Introductory code walkthroughs of using the library are covered in the coding steps of the Hello AI World tutorial above.

Additional C++ and Python samples for running the networks on static images and live camera streams can be found here:
|                    | Images                | Camera               |
|--------------------|-----------------------|----------------------|
| **C++ (`examples`)**           |                       |                      |
| Image Recognition  | `imagenet-console`    | `imagenet-camera`    |
| Object Detection   | `detectnet-console`   | `detectnet-camera`   |
| Segmentation       | `segnet-console`      | `segnet-camera`      |
| **Python (`python/examples`)** |                       |                      |
| Image Recognition  | `imagenet-console.py` | `imagenet-camera.py` |
| Object Detection   | `detectnet-console.py`| `detectnet-camera.py`|
> note: for working with numpy arrays, see `cuda-from-numpy.py` and `cuda-to-numpy.py`
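For example, a minimal sketch of the numpy interop (the random array here is just a stand-in for a real float32 RGBA image):

```python
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.imageNet("googlenet")

# stand-in for a real image: a 512x512 float32 RGBA numpy array
array = (np.random.rand(512, 512, 4) * 255.0).astype(np.float32)

# copy the array into shared CUDA memory and classify it
cuda_img = jetson.utils.cudaFromNumpy(array)
class_idx, confidence = net.Classify(cuda_img, 512, 512)

print(net.GetClassDesc(class_idx), confidence)
```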
These examples will automatically be compiled while Building the Project from Source, and are able to run the pre-trained models listed below in addition to custom models provided by the user. Launch each example with `--help` for usage info.
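To give a flavor of the camera samples, here is a sketch along the lines of `detectnet-camera.py` (the V4L2 device `/dev/video0` and the 1280x720 resolution are assumptions; MIPI CSI sensors are selected by index, e.g. `"0"`, instead):

```python
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# open the camera ("/dev/video0" for V4L2; use e.g. "0" for a MIPI CSI sensor)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    # capture a frame and run detection (the overlay is drawn in-place)
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)

    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | {:.0f} FPS".format(display.GetFPS()))
```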
Pre-trained Models
The project comes with a number of pre-trained models that are available through the Model Downloader tool:
Image Recognition
| Network      | CLI argument   | NetworkType enum |
|--------------|----------------|------------------|
| AlexNet      | `alexnet`      | `ALEXNET`        |
| GoogleNet    | `googlenet`    | `GOOGLENET`      |
| GoogleNet-12 | `googlenet-12` | `GOOGLENET_12`   |
| ResNet-18    | `resnet-18`    | `RESNET_18`      |
| ResNet-50    | `resnet-50`    | `RESNET_50`      |
| ResNet-101   | `resnet-101`   | `RESNET_101`     |
| ResNet-152   | `resnet-152`   | `RESNET_152`     |
| VGG-16       | `vgg-16`       | `VGG_16`         |
| VGG-19       | `vgg-19`       | `VGG_19`         |
| Inception-v4 | `inception-v4` | `INCEPTION_V4`   |
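The CLI argument strings above are what the constructors accept at load time; for instance (a small sketch, with `resnet-18` chosen arbitrarily from the table):

```python
import jetson.inference

# select a pre-trained model by its CLI argument string from the table
net = jetson.inference.imageNet("resnet-18")
```

The console and camera samples accept the same strings through their `--network` option.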
Object Detection
| Network                 | CLI argument        | NetworkType enum    | Object classes       |
|-------------------------|---------------------|---------------------|----------------------|
| SSD-Mobilenet-v1        | `ssd-mobilenet-v1`  | `SSD_MOBILENET_V1`  | 91 (COCO classes)    |
| SSD-Mobilenet-v2        | `ssd-mobilenet-v2`  | `SSD_MOBILENET_V2`  | 91 (COCO classes)    |
| SSD-Inception-v2        | `ssd-inception-v2`  | `SSD_INCEPTION_V2`  | 91 (COCO classes)    |
| DetectNet-COCO-Dog      | `coco-dog`          | `COCO_DOG`          | dogs                 |
| DetectNet-COCO-Bottle   | `coco-bottle`       | `COCO_BOTTLE`       | bottles              |
| DetectNet-COCO-Chair    | `coco-chair`        | `COCO_CHAIR`        | chairs               |
| DetectNet-COCO-Airplane | `coco-airplane`     | `COCO_AIRPLANE`     | airplanes            |
| ped-100                 | `pednet`            | `PEDNET`            | pedestrians          |
| multiped-500            | `multiped`          | `PEDNET_MULTI`      | pedestrians, luggage |
| facenet-120             | `facenet`           | `FACENET`           | faces                |
Semantic Segmentation
| Network                | CLI argument                    | NetworkType enum                | Classes |
|------------------------|---------------------------------|---------------------------------|---------|
| Cityscapes (2048x2048) | `fcn-alexnet-cityscapes-hd`     | `FCN_ALEXNET_CITYSCAPES_HD`     | 21      |
| Cityscapes (1024x1024) | `fcn-alexnet-cityscapes-sd`     | `FCN_ALEXNET_CITYSCAPES_SD`     | 21      |
| Pascal VOC (500x356)   | `fcn-alexnet-pascal-voc`        | `FCN_ALEXNET_PASCAL_VOC`        | 21      |
| Synthia (CVPR16)       | `fcn-alexnet-synthia-cvpr`      | `FCN_ALEXNET_SYNTHIA_CVPR`      | 14      |
| Synthia (Summer-HD)    | `fcn-alexnet-synthia-summer-hd` | `FCN_ALEXNET_SYNTHIA_SUMMER_HD` | 14      |
| Synthia (Summer-SD)    | `fcn-alexnet-synthia-summer-sd` | `FCN_ALEXNET_SYNTHIA_SUMMER_SD` | 14      |
| Aerial-FPV (1280x720)  | `fcn-alexnet-aerial-fpv-720p`   | `FCN_ALEXNET_AERIAL_FPV_720p`   | 2       |
Recommended System Requirements
Training GPU:
- Maxwell, Pascal, Volta, or Turing-based GPU (ideally with at least 6GB video memory)
- optionally, AWS P2/P3 instance or Microsoft Azure N-series
- Ubuntu 16.04/18.04 x86_64

Deployment:
- Jetson Nano Developer Kit with JetPack 4.2 or newer (Ubuntu 18.04 aarch64)
- Jetson Xavier Developer Kit with JetPack 4.0 or newer (Ubuntu 18.04 aarch64)
- Jetson TX2 Developer Kit with JetPack 3.0 or newer (Ubuntu 16.04 aarch64)
- Jetson TX1 Developer Kit with JetPack 2.3 or newer (Ubuntu 16.04 aarch64)
Note that the TensorRT samples from the repo are intended for deployment onboard Jetson; however, when cuDNN and TensorRT have been installed on the host side, the samples can be compiled for PC.
Extra Resources
Below are additional deep learning links and resources:
- ros_deep_learning - TensorRT inference ROS nodes
- NVIDIA AI IoT - NVIDIA Jetson GitHub repositories
- Jetson eLinux Wiki - Jetson wiki on eLinux.org
Legacy Links
Since the documentation has been re-organized, below are links mapping the previous content to the new locations.
- DIGITS Workflow: see DIGITS Workflow
- System Setup: see DIGITS Setup
- Running JetPack on the Host: see JetPack Setup
- Installing Ubuntu on the Host: see DIGITS Setup
- Setting up host training PC with NGC container: see DIGITS Setup
- Installing the NVIDIA driver: see DIGITS Setup
- Installing Docker: see DIGITS Setup
- NGC Sign-up: see DIGITS Setup
- Setting up data and job directories: see DIGITS Setup
- Starting DIGITS container: see DIGITS Setup
- Natively setting up DIGITS on the Host
  - Installing NVIDIA Driver on the Host
  - Installing cuDNN on the Host
  - Installing NVcaffe on the Host
  - Installing DIGITS on the Host
  - Starting the DIGITS Server
- Building from Source on Jetson: see Building the Repo from Source
- Cloning the Repo: see Building the Repo from Source
- Configuring with CMake: see Building the Repo from Source
- Compiling the Project: see Building the Repo from Source
- Digging Into the Code: see Building the Repo from Source
- Classifying Images with ImageNet: see Classifying Images with ImageNet
- Using the Console Program on Jetson: see Classifying Images with ImageNet
- Running the Live Camera Recognition Demo: see Running the Live Camera Recognition Demo
- Re-training the Network with DIGITS: see Re-Training the Recognition Network
- Downloading Image Recognition Dataset: see Re-Training the Recognition Network
- Customizing the Object Classes: see Re-Training the Recognition Network
- Importing Classification Dataset into DIGITS: see Re-Training the Recognition Network
- Creating Image Classification Model with DIGITS: see Re-Training the Recognition Network
- Testing Classification Model in DIGITS: see Re-Training the Recognition Network
- Downloading Model Snapshot to Jetson: see Downloading Model Snapshots to Jetson
- Loading Custom Models on Jetson: see Loading Custom Models on Jetson
- Locating Object Coordinates using DetectNet: see Locating Object Coordinates using DetectNet
- Detection Data Formatting in DIGITS: see Locating Object Coordinates using DetectNet
- Downloading the Detection Dataset: see Locating Object Coordinates using DetectNet
- Importing the Detection Dataset into DIGITS: see Locating Object Coordinates using DetectNet
- Creating DetectNet Model with DIGITS: see Locating Object Coordinates using DetectNet
- Selecting DetectNet Batch Size: see Locating Object Coordinates using DetectNet
- Specifying the DetectNet Prototxt: see Locating Object Coordinates using DetectNet
- Training the Model with Pretrained Googlenet: see Locating Object Coordinates using DetectNet
- Testing DetectNet Model Inference in DIGITS: see Locating Object Coordinates using DetectNet
- Downloading the Model Snapshot to Jetson: see Downloading the Detection Model to Jetson
- DetectNet Patches for TensorRT: see Downloading the Detection Model to Jetson
- Processing Images from the Command Line on Jetson: see Detecting Objects from the Command Line
- Launching With a Pretrained Model: see Detecting Objects from the Command Line
- Pretrained DetectNet Models Available: see Detecting Objects from the Command Line
- Running Other MS-COCO Models on Jetson: see Detecting Objects from the Command Line
- Running Pedestrian Models on Jetson: see Detecting Objects from the Command Line
- Multi-class Object Detection Models: see Detecting Objects from the Command Line
- Running the Live Camera Detection Demo on Jetson: see Running the Live Camera Detection Demo
- Image Segmentation with SegNet: see Semantic Segmentation with SegNet
- Downloading Aerial Drone Dataset: see Semantic Segmentation with SegNet
- Importing the Aerial Dataset into DIGITS: see Semantic Segmentation with SegNet
- Generating Pretrained FCN-Alexnet: see Generating Pretrained FCN-Alexnet
- Training FCN-Alexnet with DIGITS: see Training FCN-Alexnet with DIGITS
- Testing Inference Model in DIGITS: see Training FCN-Alexnet with DIGITS
- FCN-Alexnet Patches for TensorRT: see FCN-Alexnet Patches for TensorRT
- Running Segmentation Models on Jetson
© 2016-2019 NVIDIA