This repository is part of the cpp-ml-server project and holds the model configurations for the Triton Inference Server.
Please read the Triton Inference Server guide to learn how to use different model sources and model configurations.
- Load and export a timm classification model to ONNX. Modify the model name in the script, then run (a sketch of the script follows below):

  ```shell
  python3 export_onnx.py
  ```
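  The repository's `export_onnx.py` is the source of truth; the snippet below is only a minimal sketch of what such an export could look like. The output file name, tensor names, and opset version are assumptions, not taken from this repository:

  ```python
  import timm
  import torch

  MODEL_NAME = "efficientnetv2_rw_s"  # modify the model name here

  # Load the pretrained classifier and switch to inference mode.
  model = timm.create_model(MODEL_NAME, pretrained=True)
  model.eval()

  # Derive the expected input size from the model's pretrained config.
  dummy_input = torch.randn(1, *model.default_cfg["input_size"])

  # Export with a fixed input shape, matching the static, batch-size-1
  # configuration described below. Tensor names are assumptions.
  torch.onnx.export(
      model,
      dummy_input,
      "model.onnx",
      input_names=["input"],
      output_names=["output"],
      opset_version=13,
  )
  ```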
- ImageNet Classification Static (see the example request below)
  - Name: `imagenet_classification_static`
  - Max batch size: 1
  - Model origin: `timm/efficientnetv2_rw_s`
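  Once the exported `model.onnx` is served by Triton under this configuration, it can be exercised with the Triton Python client. This is a hypothetical sketch: the endpoint, the tensor names `input`/`output`, and the 3x288x288 input size are assumptions, not part of this repository:

  ```python
  import numpy as np
  import tritonclient.http as httpclient

  # Assumes Triton's HTTP endpoint is reachable at localhost:8000.
  client = httpclient.InferenceServerClient(url="localhost:8000")

  # Max batch size is 1, so send a single image per request. The
  # 3x288x288 input size is an assumption; use whatever size the
  # model was actually exported with.
  image = np.random.rand(1, 3, 288, 288).astype(np.float32)

  infer_input = httpclient.InferInput("input", list(image.shape), "FP32")
  infer_input.set_data_from_numpy(image)

  result = client.infer("imagenet_classification_static", inputs=[infer_input])
  logits = result.as_numpy("output")
  print(logits.argmax(axis=1))  # predicted ImageNet class index
  ```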
- Name: