
Intel® Extension for PyTorch*

CPU 💻main branch   |   🌱Quick Start   |   📖Documentation   |   🏃Installation   |   💻LLM Example
GPU 💻main branch   |   🌱Quick Start   |   📖Documentation   |   🏃Installation   |   💻LLM Example

Intel® Extension for PyTorch* extends PyTorch* with up-to-date feature optimizations for an extra performance boost on Intel hardware. The optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel® Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.
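For example, applying these optimizations goes through the extension's `ipex.optimize` entry point. The sketch below is minimal and illustrative; the toy model and input shapes are placeholders:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Toy model standing in for a real workload (hypothetical).
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).eval()
data = torch.rand(1, 64)

# Apply Intel® Extension for PyTorch* optimizations (BF16 shown here).
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast():
    output = model(data)

# On a GPU build of the extension, the same model can instead be moved to
# the "xpu" device, e.g. model = model.to("xpu"); data = data.to("xpu").
```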

ipex.llm - Large Language Models (LLMs) Optimization

In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting with release 2.1.0, Intel® Extension for PyTorch* provides specific optimizations for certain LLM models. Check LLM optimizations for details.
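These LLM optimizations are applied via the dedicated `ipex.llm.optimize` front end. A minimal sketch follows; the model ID, prompt, and generation settings are illustrative, so verify the call against your installed version:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # any model from the list below
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Apply the LLM-specific optimizations (indirect access KV cache,
# fused ROPE, etc.) via the ipex.llm front end.
model = ipex.llm.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("What is AI?", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```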

Optimized Model List

| MODEL FAMILY | MODEL NAME (Huggingface hub) | FP32 | BF16 | Static quantization INT8 | Weight only quantization INT8 | Weight only quantization INT4 |
|---|---|:---:|:---:|:---:|:---:|:---:|
| LLAMA | meta-llama/Llama-2-7b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| LLAMA | meta-llama/Llama-2-13b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| LLAMA | meta-llama/Llama-2-70b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| LLAMA | meta-llama/Meta-Llama-3-8B | 🟩 | 🟩 | 🟨 | 🟩 | |
| LLAMA | meta-llama/Meta-Llama-3-70B | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
| GPT-J | EleutherAI/gpt-j-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| GPT-NEOX | EleutherAI/gpt-neox-20b | 🟩 | 🟨 | 🟨 | 🟩 | 🟨 |
| DOLLY | databricks/dolly-v2-12b | 🟩 | 🟨 | 🟨 | 🟩 | 🟨 |
| FALCON | tiiuae/falcon-7b | 🟩 | 🟩 | 🟩 | 🟩 | |
| FALCON | tiiuae/falcon-40b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| OPT | facebook/opt-30b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| OPT | facebook/opt-1.3b | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
| Bloom | bigscience/bloom-1b7 | 🟩 | 🟨 | 🟩 | 🟩 | 🟨 |
| CodeGen | Salesforce/codegen-2B-multi | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| Baichuan | baichuan-inc/Baichuan2-7B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | |
| Baichuan | baichuan-inc/Baichuan2-13B-Chat | 🟩 | 🟩 | 🟨 | 🟩 | |
| Baichuan | baichuan-inc/Baichuan-13B-Chat | 🟩 | 🟨 | 🟩 | 🟩 | |
| ChatGLM | THUDM/chatglm3-6b | 🟩 | 🟩 | 🟨 | 🟩 | |
| ChatGLM | THUDM/chatglm2-6b | 🟩 | 🟩 | 🟨 | 🟩 | |
| GPTBigCode | bigcode/starcoder | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
| T5 | google/flan-t5-xl | 🟩 | 🟩 | | 🟩 | |
| MPT | mosaicml/mpt-7b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
| Mistral | mistralai/Mistral-7B-v0.1 | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
| Mixtral | mistralai/Mixtral-8x7B-v0.1 | 🟩 | 🟩 | | 🟩 | 🟨 |
| Stablelm | stabilityai/stablelm-2-1_6b | 🟩 | 🟩 | 🟨 | 🟩 | 🟨 |
| Qwen | Qwen/Qwen-7B-Chat | 🟩 | 🟩 | 🟨 | 🟩 | |
| LLaVA | liuhaotian/llava-v1.5-7b | 🟩 | 🟩 | | 🟩 | |
| GIT | microsoft/git-base | 🟩 | 🟩 | | 🟩 | |
| Yuan | IEITYuan/Yuan2-102B-hf | 🟩 | 🟩 | | 🟨 | |
| Phi | microsoft/phi-2 | 🟩 | 🟩 | 🟩 | 🟩 | 🟨 |
  • 🟩 signifies that the model performs well with good accuracy (<1% difference compared with FP32).

  • 🟨 signifies that the model performs well, while the accuracy may not be perfect (>1% difference compared with FP32).

Note: The verified models above (including other models in the same model family, such as "codellama/CodeLlama-7b-hf" from the LLAMA family) are well supported with all optimizations, such as indirect access KV cache, fused ROPE, and prepacked TPP Linear (FP32/BF16). Work is in progress to better support the models in the table with various data types, and more models will be optimized in the future.

In addition, starting with release 2.3.0, Intel® Extension for PyTorch* provides module-level optimization APIs (a prototype feature). The feature offers optimized alternatives for several commonly used LLM modules and functionalities, for optimizing niche or customized LLMs. Please read the LLM module level optimization practice to better understand how to optimize your own LLM and achieve better performance; a short sketch of the pattern follows.
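As a hedged sketch of that module-level pattern, the snippet below swaps a linear + SiLU pair in a toy block for `ipex.llm.modules.LinearSilu`. The block itself is hypothetical, and the constructor usage should be verified against the module-level optimization documentation for your installed version:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Toy sub-module standing in for part of a customized LLM (hypothetical).
class MLPBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return nn.functional.silu(self.proj(x))

block = MLPBlock(64).eval()

# Replace the linear + SiLU pair with the fused, optimized alternative.
fused = ipex.llm.modules.LinearSilu(block.proj)

x = torch.rand(1, 64)
with torch.no_grad():
    y = fused(x)
```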

Support

The team tracks bugs and enhancement requests using GitHub issues. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

License

Apache License, Version 2.0, as found in the LICENSE file.

Security

See Intel's Security Center for information on how to report a potential security issue or vulnerability.

See also: Security Policy