Pinned Repositories
ggml
Tensor library for machine learning
NeMo
NeMo: a toolkit for conversational AI
odin
Lightweight Machine Learning Framework for workflows with Kubernetes
onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
sample-odin-configs
Sample configs for setting up Odin locally
sample-odin-pipelines
Some sample pipelines with odin
TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
triton-client
Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.
vecxx
Interactions-AI's Repositories
Interactions-AI/odin
Lightweight Machine Learning Framework for workflows with Kubernetes
Interactions-AI/vecxx
Interactions-AI/sample-odin-configs
Sample configs for setting up Odin locally
Interactions-AI/sample-odin-pipelines
Some sample pipelines with odin
Interactions-AI/espnet
End-to-End Speech Processing Toolkit
Interactions-AI/ggml
Tensor library for machine learning
Interactions-AI/NeMo
NeMo: a toolkit for conversational AI
Interactions-AI/NeMo-I
NeMo: a toolkit for conversational AI
Interactions-AI/onnxruntime
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
Interactions-AI/riva-asrlib-decoder
Standalone implementation of the CUDA-accelerated WFST Decoder available in Riva
Interactions-AI/silero-vad
Silero VAD: pre-trained enterprise-grade Voice Activity Detector
Interactions-AI/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Interactions-AI/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
Interactions-AI/triton-client
Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.
Interactions-AI/triton-server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.