Pinned Repositories
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
grpc
The C-based gRPC implementation (C++, Python, Ruby, Objective-C, PHP, C#)
text_upload
warlock135's Repositories
warlock135/grpc
warlock135/text_upload