Pinned Repositories
aistore
AIStore: scalable storage for AI applications
DeepLearningExamples
State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
GenerativeAIExamples
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
Megatron-LM
Ongoing research on training transformer models at scale
NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
nvidia-container-toolkit
Build and run containers leveraging NVIDIA GPUs
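As a quick sketch of the documented toolkit workflow (assumes Docker and an NVIDIA driver are already installed on the host; the CUDA image tag is illustrative):

```shell
# Register the NVIDIA runtime with the Docker daemon (run once, as root)
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Run a CUDA base image with all host GPUs visible inside the container
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the setup is working, `nvidia-smi` inside the container reports the same GPUs as on the host.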
nvidia-docker
Build and run Docker containers leveraging NVIDIA GPUs
open-gpu-kernel-modules
NVIDIA Linux open GPU kernel module source
tensorflow
An Open Source Machine Learning Framework for Everyone
TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
NVIDIA Corporation's Repositories
NVIDIA/NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
NVIDIA/Megatron-LM
Ongoing research on training transformer models at scale
NVIDIA/TensorRT-LLM
TensorRT-LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines containing state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for creating Python and C++ runtimes that execute those engines.
NVIDIA/warp
A Python framework for high performance GPU simulation and graphics
NVIDIA/nv-ingest
NVIDIA Ingest is an early-access set of microservices for parsing hundreds of thousands of complex, messy, unstructured PDFs and other enterprise documents into metadata and text for embedding into retrieval systems.
NVIDIA/gpu-operator
NVIDIA GPU Operator creates, configures, and manages GPUs in Kubernetes
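A minimal sketch of the documented Helm-based install (assumes a running Kubernetes cluster with `helm` and `kubectl` configured; namespace name follows the project's docs):

```shell
# Add NVIDIA's Helm repository and install the GPU Operator
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm install --wait gpu-operator \
  -n gpu-operator --create-namespace \
  nvidia/gpu-operator
```

The operator then deploys and manages the driver, container toolkit, and device plugin on GPU nodes.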
NVIDIA/cccl
CUDA Core Compute Libraries
NVIDIA/cuda-python
CUDA Python: Performance meets Productivity
NVIDIA/Q2RTX
NVIDIA’s implementation of RTX ray-tracing in Quake II
NVIDIA/gdrcopy
A fast GPU memory copy library based on NVIDIA GPUDirect RDMA technology
NVIDIA/cuda-quantum
C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
NVIDIA/hpc-container-maker
HPC Container Maker (HPCCM) generates container specification files (Dockerfiles and Singularity definition files) for HPC software stacks
NVIDIA/AgentIQ
The NVIDIA AgentIQ toolkit is an open-source library for efficiently connecting and optimizing teams of AI agents.
NVIDIA/bionemo-framework
BioNeMo Framework: For building and adapting AI models in drug discovery at scale
NVIDIA/k8s-dra-driver-gpu
Dynamic Resource Allocation (DRA) for NVIDIA GPUs in Kubernetes
NVIDIA/Fuser
A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser")
NVIDIA/JAX-Toolbox
CI and tested container builds for JAX on NVIDIA GPUs
NVIDIA/nvtrust
Ancillary open source software to support confidential computing on NVIDIA GPUs
NVIDIA/Megatron-Energon
Megatron's multi-modal data loader
NVIDIA/nim-deploy
A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deployment.
NVIDIA/NeMo-Run
A tool to configure, launch, and manage your machine learning experiments.
NVIDIA/gpu-driver-container
The NVIDIA GPU driver container provisions the NVIDIA driver on a host through containers.
NVIDIA/TensorRT-Incubator
Experimental projects related to TensorRT
NVIDIA/k8s-nim-operator
An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment.
NVIDIA/numba-cuda
The CUDA target for Numba
NVIDIA/spark-rapids-jni
RAPIDS Accelerator JNI For Apache Spark
NVIDIA/cudaqx
Accelerated libraries for quantum-classical computing built on CUDA-Q.
NVIDIA/multi-storage-client
Unified high-performance Python client for object and file stores.
NVIDIA/G-Assist
Help shape the future of Project G-Assist
NVIDIA/KAI-Scheduler
KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale