zhenwei-intel's Stars
HabanaAI/vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
opea-project/GenAIComps
GenAI components at micro-service level; GenAI service composer to create mega-service
intel/auto-round
Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
intel/neural-speed
An innovative library for efficient LLM inference via low-bit quantization
MegEngine/InferLLM
a lightweight LLM model inference framework
Cambricon/mlu-ops
Efficient operator implementations based on the Cambricon Machine Learning Unit (MLU).
numba/numba
NumPy-aware dynamic Python compiler using LLVM
intel/intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms ⚡
intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques for TensorFlow, PyTorch, and ONNX Runtime