Pinned Repositories
apex
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
fast-hadamard-transform
Fast Hadamard transform in CUDA, with a PyTorch interface
flash-attention
Fast and memory-efficient exact attention
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
vllm-fix
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
taichi
Productive, portable, and performant GPU programming in Python.
triton
Development repository for the Triton language and compiler
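The flash-attention pin above refers to a CUDA implementation of exact attention that avoids materializing the full score matrix. As a rough illustration only (not the repo's actual kernels or API), the sketch below shows the online-softmax trick that makes this possible: keys and values are processed in blocks while only a running max, running normalizer, and running output are kept, yet the result matches a full softmax exactly. All names here are made up for the example.

```python
import math

def naive_attention(q, keys, values):
    # Reference: compute all scores, then one full softmax (O(n) extra memory).
    scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) / z for d in range(dim)]

def online_attention(q, keys, values, block=2):
    # One pass over key/value blocks. Only the running max (m), running
    # normalizer (l), and running unnormalized output (acc) are kept, so
    # memory is independent of the number of keys. The rescaling by
    # exp(m - m_new) keeps earlier contributions consistent when a new,
    # larger max is found -- the result is exact, not approximate.
    m = float("-inf")
    l = 0.0
    acc = [0.0] * len(values[0])
    for start in range(0, len(keys), block):
        k_blk = keys[start:start + block]
        v_blk = values[start:start + block]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in k_blk]
        m_new = max(m, max(scores))
        scale = math.exp(m - m_new)  # exp(-inf) == 0.0 on the first block
        l *= scale
        acc = [a * scale for a in acc]
        for s, v in zip(scores, v_blk):
            w = math.exp(s - m_new)
            l += w
            acc = [a + w * vd for a, vd in zip(acc, v)]
        m = m_new
    return [a / l for a in acc]
```

The real library fuses this loop into GPU kernels and tiles over queries as well; the point of the sketch is only why exact attention does not require O(n^2) memory.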
beginlner's Repositories
beginlner/flash-attention
Fast and memory-efficient exact attention
beginlner/apex
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
beginlner/fast-hadamard-transform
Fast Hadamard transform in CUDA, with a PyTorch interface
beginlner/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
beginlner/vllm-fix
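fast-hadamard-transform, listed above, exposes a CUDA kernel behind a PyTorch interface. Purely as an illustration of the underlying algorithm (this is not the repo's API), the snippet below is the O(n log n) fast Walsh-Hadamard butterfly in plain Python.

```python
def fht(x):
    """Unnormalized fast Walsh-Hadamard transform of a power-of-two-length list.

    Uses the butterfly recurrence H_{2n} = [[H_n, H_n], [H_n, -H_n]]:
    log2(n) passes of pairwise sums and differences, O(n log n) total work
    versus O(n^2) for a direct matrix multiply.
    """
    x = list(x)
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                a, b = x[i], x[i + h]
                x[i], x[i + h] = a + b, a - b
        h *= 2
    return x
```

A handy check: the unnormalized transform is self-inverse up to a factor of n, so applying it twice returns n times the input.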