Pinned Repositories
neural-compressor
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks to pursue optimal inference performance.
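A minimal post-training quantization sketch using Neural Compressor's 2.x Python API; the toy model and calibration loader below are stand-ins, and exact config knobs vary across versions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# Toy FP32 model and calibration data stand in for a real network/dataset.
fp32_model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4)
)
calib_loader = DataLoader(
    TensorDataset(torch.randn(64, 8), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

# Post-training static quantization: calibrate on representative inputs,
# then emit an INT8 model.
q_model = quantization.fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=calib_loader,
)
q_model.save("./int8_model")
```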
oneDNN
oneAPI Deep Neural Network Library (oneDNN)
pytorch-fork
Tensors and Dynamic neural networks in Python with strong GPU acceleration
torchao-fork
The torchao repository contains APIs and workflows for quantizing and pruning GPU models.
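As a sketch of that workflow, torchao exposes a one-call `quantize_` API that swaps eligible layers in place (int8 weight-only shown here; the helper names have shifted across torchao releases, so treat them as assumptions):

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

# Toy model standing in for a real GPU model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 64)
)

# quantize_ mutates the model, replacing Linear weights with
# int8 weight-only quantized equivalents.
quantize_(model, int8_weight_only())
out = model(torch.randn(2, 128))
```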
torchutils
Torch helper functions
yiliu30's Repositories
yiliu30/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
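A minimal training-loop sketch around `deepspeed.initialize`; in practice the config usually lives in a JSON file and the script is started with the `deepspeed` launcher, which also sets up the distributed environment:

```python
import torch
import deepspeed

# Toy model; the config dict mirrors what a ds_config.json would hold.
model = torch.nn.Linear(16, 4)
ds_config = {
    "train_batch_size": 8,
    "optimizer": {"type": "SGD", "params": {"lr": 1e-3}},
}

# initialize() wraps the model in a DeepSpeedEngine that owns the
# optimizer, gradient handling, and device placement.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 16).to(engine.device)
loss = engine(x).sum()
engine.backward(loss)  # replaces loss.backward()
engine.step()          # replaces optimizer.step()
```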
yiliu30/neural-compressor
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks to pursue optimal inference performance.
yiliu30/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
yiliu30/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
yiliu30/torchao-fork
The torchao repository contains APIs and workflows for quantizing and pruning GPU models.
yiliu30/torchutils
Torch helper functions
yiliu30/accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed-precision support
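A minimal sketch of the usual Accelerator pattern: `prepare()` the model, optimizer, and dataloader, then call `accelerator.backward()` in place of `loss.backward()` so the same loop runs on one GPU, many GPUs, or TPU:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # reads device/precision setup from `accelerate config`

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loader = DataLoader(
    TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,))), batch_size=4
)

# prepare() moves everything to the right device(s) and wraps for DDP/TPU as needed.
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)
for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)  # handles mixed precision / gradient scaling
    optimizer.step()
```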
yiliu30/ai-pr-reviewer
AI-based Pull Request Summarizer and Reviewer with Chat Capabilities.
yiliu30/auto-awq-fork
AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference.
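A quantization sketch following the AutoAWQ README pattern; `facebook/opt-125m` is just a small stand-in checkpoint, and the quant_config values are the commonly cited defaults rather than anything prescribed here:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "facebook/opt-125m"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# quantize() runs AWQ's activation-aware calibration internally.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("opt-125m-awq")
tokenizer.save_pretrained("opt-125m-awq")
```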
yiliu30/auto-round
SOTA Weight-only Quantization Algorithm for LLMs
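A minimal sketch of the AutoRound flow, which tunes weight rounding with signed gradient descent; the stand-in checkpoint and the `bits`/`group_size`/`sym` values are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"  # small stand-in model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weight-only quantization with per-group scales.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("./opt-125m-w4")
```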
yiliu30/AutoGPTQ-fork
An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm.
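A sketch of the package's quantize-then-save flow; the single calibration example is only illustrative, since real GPTQ runs calibrate on many more samples:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
examples = [
    tokenizer("AutoGPTQ is an easy-to-use LLM quantization package.",
              return_tensors="pt")
]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = AutoGPTQForCausalLM.from_pretrained(pretrained, quantize_config)
model.quantize(examples)  # GPTQ calibrates layer by layer on the examples
model.save_quantized("opt-125m-4bit-gptq")
```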
yiliu30/gemma.cpp
Lightweight, standalone C++ inference engine for Google's Gemma models.
yiliu30/gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
yiliu30/hqq-fork
Official implementation of Half-Quadratic Quantization (HQQ)
yiliu30/intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device, apply SOTA compression techniques to LLMs, and run LLMs efficiently on Intel platforms ⚡
yiliu30/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
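A sketch of the drop-in HF-style loading path with 4-bit weights; module paths have changed across releases (the project was formerly bigdl-llm), so treat the import and flag as assumptions:

```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_name = "facebook/opt-125m"  # small stand-in checkpoint
# load_in_4bit converts weights to a low-bit format tuned for Intel hardware.
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer("Low-bit inference on Intel hardware", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```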
yiliu30/marlin
FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups at medium batch sizes of up to 16-32 tokens.
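Back-of-envelope arithmetic behind the ~4x figure, assuming decode at small batch sizes is bound by streaming the weights from memory rather than by compute:

```python
# Ideal speedup is the ratio of weight widths: 16-bit vs 4-bit storage.
fp16_bits, int4_bits = 16, 4
ideal_speedup = fp16_bits / int4_bits
print(ideal_speedup)  # 4.0; larger batches shift the bottleneck toward compute
```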
yiliu30/ml-engineering
Machine Learning Engineering Open Book
yiliu30/nn-zero-to-hero
Neural Networks: Zero to Hero
yiliu30/notes
yiliu30/optimum-habana
Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU)
yiliu30/subclass_zoo
yiliu30/tgi
Large Language Model Text Generation Inference
yiliu30/Torch-Fx-Graph-Visualizer
Visualizer for neural network, deep learning and machine learning models
yiliu30/training-operator
Training operators on Kubernetes.
yiliu30/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
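For completeness, the canonical three-line entry point; `pipeline()` bundles tokenizer, model, and post-processing behind one call, downloading a default checkpoint on first use:

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Quantized models can be both fast and accurate."))
```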
yiliu30/tutorials
PyTorch tutorials.
yiliu30/xTuring
Easily build, customize and control your own LLMs
yiliu30/yi
yiliu30/yiliu30.github.io.tmp