- Quick Start Instructions
- Convert ONNX model to TAO compatible model
- Run BYOM model through TAO Toolkit
- List of Tested Models
To run the reference TAO Toolkit BYOM converter implementations for TF1 models, follow the steps below:
Before running the examples defined in this repository, install the following items:
Component | Version |
---|---|
python | >=3.6.9 <3.7 |
python3-pip | >21.06 |
nvidia-driver | >455 |
nvidia-pyindex | >=1.0 |
- Set up miniconda using the following instructions:
You may follow the instructions in this link to set up a Python conda environment using miniconda.
Once you have followed the instructions to install miniconda, set the Python version in the new conda environment with this command:
conda create -n byom_dev python=3.6
Once you have created this conda environment, you can reactivate it in any terminal session with this command:
conda activate byom_dev
- Install python-pip dependencies.
This repository relies on several third-party Python dependencies, which you can install into your conda environment using the following command:
pip3 install -r requirements.txt --no-deps
- Install TensorFlow 1.
Before using the NVIDIA TAO BYOM converter, you must install TensorFlow 1.15.x. Use the following commands to install it:
pip3 install nvidia-pyindex
pip3 install nvidia-tensorflow
- Install the NVIDIA TAO BYOM converter.
The NVIDIA TAO BYOM converter is hosted in the official PyPI repository and can be installed using the following command:
pip3 install nvidia-tao-byom
- Check your installation using the following command:
tao_byom --help
To run the reference TAO Toolkit BYOM converter implementations for TF2 models, follow the steps below:
Before running the examples defined in this repository, install the following items:
Component | Version |
---|---|
python | ==3.8.* |
python3-pip | >21.06 |
nvidia-driver | >455 |
- Set up miniconda using the following instructions:
You may follow the instructions in this link to set up a Python conda environment using miniconda.
Once you have followed the instructions to install miniconda, set the Python version in the new conda environment with this command:
conda create -n byom_dev python=3.8
Once you have created this conda environment, you can reactivate it in any terminal session with this command:
conda activate byom_dev
- Install python-pip dependencies.
This repository relies on several third-party Python dependencies, which you can install into your conda environment using the following command:
pip3 install -r requirements.txt --no-deps
- Install TensorFlow 2.
Before using the NVIDIA TAO BYOM converter for TF2 classification, you must install the TensorFlow 2.9.x package and the necessary CUDA-related dependencies:
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
pip3 install tensorflow==2.9.1
- Install the NVIDIA TAO BYOM converter.
The NVIDIA TAO BYOM converter is hosted in the official PyPI repository and can be installed using the following command:
pip3 install nvidia-tao-byom
- Check your installation using the following command:
tao_byom --help
In this repository, there are currently two main tasks: classification and semantic segmentation. Other tasks supported by TAO Toolkit, such as object detection, will be included in TAO BYOM in the future.
All the examples shown in this repository are from PyTorch. Any other deep learning framework that can be exported to ONNX will work as long as the data format is channel_first (N, C, H, W). If you do not wish to go through the export-to-ONNX steps, you can also start with models provided in the ONNX models repo.
Below is a list of considerations before using TAO BYOM:
- The ONNX model must use the channel_first data format.
- Only classification and semantic segmentation are supported.
- Dynamic input shape is not supported. You must export the ONNX model using the same input shape you will use in the TAO Toolkit spec file.
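As a rough illustration of the first and last considerations, the sketch below exports a PyTorch model to ONNX with a static input shape (no `dynamic_axes`), together with a small heuristic check that a shape is channel_first. The function names, the example shape `(1, 3, 224, 224)`, and opset 11 are illustrative assumptions, not part of the TAO BYOM API.

```python
def is_channel_first(shape):
    """Heuristic check that a 4-D input shape is channel_first (N, C, H, W).

    Assumes the channel dimension is small (e.g. 1 or 3) relative to the
    spatial dimensions -- a sanity check, not a guarantee.
    """
    if len(shape) != 4:
        return False
    n, c, h, w = shape
    return c <= min(h, w)


def export_fixed_shape_onnx(model, input_shape=(1, 3, 224, 224), path="model.onnx"):
    """Export a PyTorch model to ONNX with a static (non-dynamic) input shape.

    Omitting the `dynamic_axes` argument keeps every dimension fixed, which
    matches the requirement that the ONNX input shape equal the input shape
    in the TAO Toolkit spec file.
    """
    import torch  # imported lazily so the shape helper above works without torch

    assert is_channel_first(input_shape), "expected (N, C, H, W) ordering"
    model.eval()
    dummy = torch.randn(*input_shape)  # fixed-shape dummy input traced at export
    torch.onnx.export(
        model,
        dummy,
        path,
        opset_version=11,              # assumption: a commonly supported opset
        input_names=["input"],
        output_names=["output"],
        # no dynamic_axes -> all dimensions are static in the exported graph
    )
```

For example, `export_fixed_shape_onnx(torchvision.models.resnet18(pretrained=True))` would produce a `model.onnx` whose input is fixed at 1x3x224x224, matching a spec file that uses that same shape.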
The end-to-end pipeline for running the BYOM ResNet18 classification model on the Pascal VOC dataset is shown in this notebook, and MobileNetv3-UNet on the DAGM dataset is shown in this notebook.
| Task | Model | Source | Framework | Dataset |
|---|---|---|---|---|
| Classification | ResNet | timm | PyTorch | ImageNet1K |
| | | torchvision | PyTorch | |
| | | ONNX/models | ONNX (MXNet) | |
| | EfficientNet | timm | PyTorch | |
| | | EfficientNet-PyTorch | PyTorch | |
| | VGG | torchvision | PyTorch | |
| | | ONNX/models | ONNX (MXNet) | |
| | MobileNetv2 | timm | PyTorch | |
| | | torchvision | PyTorch | |
| | | ONNX/models | ONNX (MXNet) | |
| | SqueezeNet | torchvision | PyTorch | |
| | | ONNX/models | ONNX (Caffe2) | |
| | ShuffleNet | torchvision | PyTorch | |
| | | ONNX/models | ONNX (Caffe2) | |
| | CSPDarkNet | timm | PyTorch | |
| | DenseNet | torchvision | PyTorch | |
| | | ONNX/models | ONNX (Caffe2) | |
| | GoogLeNet | torchvision | PyTorch | |
| | | ONNX/models | ONNX (Caffe2) | |
| | Inceptionv3 | torchvision | PyTorch | |
| | EfficientNetv2 | timm | PyTorch | |
| | RegNet | torchvision | PyTorch | |
| | ConvNeXt | torchvision | PyTorch | |
| | MobileNetv3 | timm | PyTorch | |
| | MobileNetv3 | timm | PyTorch | ImageNet21K |
| | ResNeXt | timm | PyTorch | IG-3.5B |
| Semantic Segmentation | Vanilla UNet | PyTorch-UNet | PyTorch | Carvana |
| | VGG16-UNet | segmentation_models.pytorch | PyTorch | CamVid |
| | ResNet18-UNet | segmentation_models.pytorch | PyTorch | |