Pinned Repositories
copyofiosched
dash-cookbook
Recipes for creating AI Applications with APIs from DashScope (and friends)!
eval-scope
A streamlined and customizable framework for efficient large model evaluation and performance benchmarking
facechain
FaceChain is a deep-learning toolchain for generating your digital twin.
FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
hello
nothing
llama_index
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
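
Since several of the pinned projects are inference tooling, two short usage sketches follow the list. First, vllm: a minimal offline-generation sketch using vLLM's documented LLM and SamplingParams entry points. The model id facebook/opt-125m and the sampling settings are illustrative choices, not anything this listing prescribes.

```python
# Minimal vLLM offline inference sketch (assumes `pip install vllm`
# and a CUDA-capable GPU).
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
# Illustrative sampling settings, not tuned values.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Illustrative small model; any Hugging Face causal LM id works here.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```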
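Second, TensorRT-LLM, whose description above highlights its Python API: recent releases ship a high-level LLM API that deliberately mirrors the vLLM sketch above. The snippet assumes such a release is installed and uses an illustrative TinyLlama model id; older releases instead require an explicit engine-build step that this sketch skips.

```python
# TensorRT-LLM high-level API sketch (assumes a recent `tensorrt_llm`
# release that exposes the LLM API, plus an NVIDIA GPU with a
# compatible driver).
from tensorrt_llm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Illustrative model id; the TensorRT engine is built on first load.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```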
liuyhwangyh's Repositories
liuyhwangyh/copyofiosched
liuyhwangyh/dash-cookbook
Recipes for creating AI Applications with APIs from DashScope (and friends)!
liuyhwangyh/eval-scope
A streamlined and customizable framework for efficient large model evaluation and performance benchmarking
liuyhwangyh/facechain
FaceChain is a deep-learning toolchain for generating your digital twin.
liuyhwangyh/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
liuyhwangyh/hello
nothing
liuyhwangyh/llama_index
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
liuyhwangyh/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
liuyhwangyh/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
liuyhwangyh/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
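
To round out the listing, a minimal retrieval-and-query sketch for the llama_index repository above. It assumes the recent package layout (llama_index.core; older releases import from llama_index directly), an OpenAI API key in the environment for the default models, and a local data/ folder of documents; all of these are assumptions for illustration, not details taken from this listing.

```python
# Minimal LlamaIndex sketch (assumes `pip install llama-index`, an
# OPENAI_API_KEY in the environment for the default LLM/embeddings,
# and a ./data directory containing a few text files to index).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # load local files
index = VectorStoreIndex.from_documents(documents)      # embed and index them
query_engine = index.as_query_engine()

print(query_engine.query("What do these documents describe?"))
```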