Pinned Repositories
flashinfer
FlashInfer: Kernel Library for LLM Serving
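A minimal decode-attention sketch, assuming a CUDA device and FlashInfer's documented single-request decode kernel; the shapes here are illustrative, not required values:

```python
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 32, 128, 1024

q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Fused attention over the whole KV cache for one query token (decode step).
o = flashinfer.single_decode_with_kv_cache(q, k, v)  # -> [num_qo_heads, head_dim]
```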
LLaMA-Factory
Efficiently fine-tune 100+ LLMs in a WebUI (ACL 2024)
llama.cpp
LLM inference in C/C++
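llama.cpp itself is C/C++; to keep these sketches in one language, here is a minimal call through the separate llama-cpp-python bindings (a community project, not part of this repo). The GGUF model path is a placeholder:

```python
from llama_cpp import Llama  # community Python bindings for llama.cpp

# Any quantized GGUF model file works here; this path is an assumption.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```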
mem0
The memory layer for Personalized AI
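A minimal sketch of storing and retrieving a memory, assuming mem0's Python `Memory` API with its default configuration (an embedding backend, e.g. an OpenAI API key, is typically required):

```python
from mem0 import Memory

m = Memory()  # default local config; embedding provider credentials assumed

# Store a fact scoped to a user, then retrieve it by semantic search.
m.add("Alice prefers vegetarian restaurants.", user_id="alice")
hits = m.search("Where should Alice eat?", user_id="alice")
print(hits)  # matching memories, ranked by relevance
```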
mlc-llm
Universal LLM Deployment Engine with ML Compilation
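A minimal sketch of MLC's OpenAI-style engine API; the model id is an assumption (MLC publishes precompiled weights under the `HF://` scheme):

```python
from mlc_llm import MLCEngine

model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # placeholder model id
engine = MLCEngine(model)

# OpenAI-compatible chat completion interface.
response = engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is ML compilation?"}],
    model=model,
    stream=False,
)
print(response.choices[0].message.content)
engine.terminate()
```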
Mooncake
Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI.
segmentation_models.pytorch
Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
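A minimal sketch constructing one of these models, following the library's documented constructor arguments:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with an ImageNet-pretrained ResNet-34 encoder.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,  # binary segmentation
)

mask_logits = model(torch.randn(1, 3, 256, 256))  # -> [1, 1, 256, 256]
```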
sglang
SGLang is yet another fast serving framework for large language models and vision language models.
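A minimal sketch of SGLang's frontend DSL, assuming a local SGLang server is already running; the endpoint URL and port are placeholders:

```python
import sglang as sgl

# A program whose generation calls the runtime can batch and cache.
@sgl.function
def qa(s, question):
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=64))

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
state = qa.run(question="What is radix attention?")
print(state["answer"])
```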
TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
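A minimal sketch of building an engine from an ONNX file with the TensorRT Python API; `model.onnx` is a placeholder, and the explicit-batch flag is required on TensorRT 8.x (on newer releases it is the deprecated spelling of the default behavior):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse an ONNX graph into the TensorRT network definition.
with open("model.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```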
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
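A minimal sketch of the high-level `LLM` API described above, which builds the TensorRT engine from Hugging Face weights on first use; the model id is an assumption:

```python
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder model id
params = SamplingParams(max_tokens=32, temperature=0.8)

for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```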
haichuan1221's Repositories
haichuan1221/flashinfer
FlashInfer: Kernel Library for LLM Serving
haichuan1221/llama.cpp
LLM inference in C/C++
haichuan1221/mem0
The memory layer for Personalized AI
haichuan1221/sglang
SGLang is yet another fast serving framework for large language models and vision language models.
haichuan1221/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
haichuan1221/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
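A minimal offline-inference sketch following vLLM's documented quickstart; the model id is a placeholder:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

outputs = llm.generate(["The future of LLM serving is"], params)
for out in outputs:
    print(out.outputs[0].text)
```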