Zhenzhong1
ML Engineer and OPEA contributor; ITREX & NeuralSpeed developer; background in HPC & AI; works at Intel; graduated from the University of Edinburgh
Intel
Zhenzhong1's Stars
intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
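As a rough illustration of the low-bit quantization idea behind such libraries, here is a minimal sketch of symmetric per-tensor INT4 quantization in plain Python. This is a conceptual example only, not neural-compressor's actual API; all function names here are hypothetical.

```python
# Minimal sketch of symmetric INT4 quantization (conceptual, not the
# neural-compressor API). Signed 4-bit integers span [-8, 7]; a single
# per-tensor scale maps floats into that range and back.

def quantize_int4(weights):
    """Return (quantized values in [-8, 7], scale factor)."""
    scale = max(abs(w) for w in weights) / 7.0  # 7 = max positive INT4 value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from INT4 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_int4(weights)
restored = dequantize_int4(q, scale)
```

Each restored weight differs from the original by at most half the scale step, which is the basic accuracy/size trade-off that INT4 (and finer-grained schemes like group-wise or NF4 quantization) manage.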
intel/intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
intel/neural-speed
An innovative library for efficient LLM inference via low-bit quantization