Pinned Repositories
llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
onnx
Open Neural Network Exchange
onnx-tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
onnxruntime
ONNX Runtime: cross-platform, high performance scoring engine for ML models
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
tensorflow
An Open Source Machine Learning Framework for Everyone
tensorflow-onnx
Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
TensorRT
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
pranavm-nvidia's Repositories
pranavm-nvidia/TensorRT
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
pranavm-nvidia/llvm-project
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
pranavm-nvidia/onnx
Open Neural Network Exchange
pranavm-nvidia/onnx-tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
pranavm-nvidia/onnxruntime
ONNX Runtime: cross-platform, high performance scoring engine for ML models
pranavm-nvidia/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
pranavm-nvidia/tensorflow
An Open Source Machine Learning Framework for Everyone
pranavm-nvidia/tensorflow-onnx
Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
pranavm-nvidia/triton-server-core
The core library and APIs implementing the Triton Inference Server.