Pinned Repositories
acat
Assistive Context-Aware Toolkit (ACAT)
ai
Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools from Intel.
cve-bin-tool
The CVE Binary Tool helps you determine whether your system includes known vulnerabilities. You can scan binaries for over 350 common vulnerable components (openssl, libpng, libxml2, expat, and others), or, if you already know the components used, you can get a list of known vulnerabilities associated with an SBOM or a list of components and versions.
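A rough usage sketch for the scan mode described above, assuming cve-bin-tool was installed with pip (which puts a cve-bin-tool command on the PATH) and using a placeholder directory path; exact flags vary by release:

    # Hedged sketch: call the cve-bin-tool CLI from Python to scan a directory
    # of binaries for known vulnerable components. The path is a placeholder.
    import subprocess

    result = subprocess.run(
        ["cve-bin-tool", "/path/to/binaries"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)  # the report lists detected components and their CVEs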
haxm
Intel® Hardware Accelerated Execution Manager (Intel® HAXM)
hyperscan
High-performance regular expression matching library
intel-extension-for-pytorch
A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms
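A minimal usage sketch for intel-extension-for-pytorch, assuming the package is installed and using a stand-in model; real workloads would pass their own trained module:

    # Hedged sketch: apply Intel Extension for PyTorch optimizations to an
    # inference-mode model. The tiny Linear model is only a placeholder.
    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Linear(128, 64).eval()   # placeholder model
    model = ipex.optimize(model)              # returns an optimized module

    with torch.no_grad():
        out = model(torch.randn(1, 128))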
intel-one-mono
Intel One Mono font repository
ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
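A rough sketch of the drop-in Hugging Face integration ipex-llm advertises, assuming its ipex_llm.transformers API and a placeholder model id; keyword arguments may differ between releases:

    # Hedged sketch: load a causal LM with ipex-llm's transformers-compatible API
    # using low-bit weights, then run it on an Intel XPU device.
    # The model id is a placeholder and requires the usual Hugging Face access.
    from ipex_llm.transformers import AutoModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"   # placeholder
    model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True).to("xpu")
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    inputs = tokenizer("What is an iGPU?", return_tensors="pt").to("xpu")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))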
neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
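A minimal post-training quantization sketch for neural-compressor, assuming its 2.x-style PostTrainingQuantConfig / quantization.fit API and placeholder model and calibration data:

    # Hedged sketch: static INT8 post-training quantization with Intel Neural Compressor.
    # The model and the calibration dataloader are stand-ins.
    import torch
    from neural_compressor import PostTrainingQuantConfig, quantization

    model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
    dataset = torch.utils.data.TensorDataset(
        torch.randn(8, 16), torch.zeros(8, dtype=torch.long)
    )
    calib_loader = torch.utils.data.DataLoader(dataset, batch_size=1)

    conf = PostTrainingQuantConfig(approach="static")
    q_model = quantization.fit(model, conf, calib_dataloader=calib_loader)
    q_model.save("./int8_model")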
rohd
The Rapid Open Hardware Development (ROHD) framework is a framework for describing and verifying hardware in the Dart programming language.
Intel® Corporation's Repositories
intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
intel/intel-extension-for-pytorch
A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms
intel/llvm
Intel staging area for llvm.org contributions. Home for Intel LLVM-based projects.
intel/compute-runtime
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
intel/media-driver
Intel Graphics Media Driver supporting hardware decode, encode, and video processing.
intel/isa-l
Intelligent Storage Acceleration Library
intel/gprofiler
gProfiler is a system-wide profiler that combines multiple sampling profilers to produce a unified visualization of what your CPU is spending time on.
intel/auto-round
Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU.
intel/intel-graphics-compiler
intel/lkp-tests
Linux Kernel Performance tests
intel/linux-intel-lts
Intel LTS kernel: a reference kernel tree containing enabling for Intel CPUs that may be upstreamed in a newer kernel version.
intel/intel-xpu-backend-for-triton
OpenAI Triton backend for Intel® GPUs
intel/ScalableVectorSearch
intel/device-modeling-language
intel/onnxruntime
ONNX Runtime: a cross-platform, high-performance scoring engine for ML models
intel/ecfw-zephyr
intel/torch-xpu-ops
intel/gits
API capture-replay tool for Vulkan, DirectX 12, OpenCL, Intel oneAPI Level Zero, and OpenGL
intel/mainline-tracking
This project hosts an upstream-tracking, rebasing branch of technology and enabling development for selected Intel platforms. It is updated following most Linus Torvalds RC releases.
intel/sycl-tla
SYCL* Templates for Linear Algebra (SYCL*TLA): a SYCL-based CUTLASS implementation for Intel GPUs
intel/linux-kernel-overlay
intel/compute-benchmarks
Compute Benchmarks for oneAPI Level Zero and OpenCL™ Driver
intel/tcf
Documentation
intel/linux-intel-quilt
intel/CacheLib
Pluggable in-process caching engine to build and scale high-performance services
intel/dml-language-server
intel/program-optimization-advice-exploration-scripts
intel/network-operator
Intel Network Operator enables automatic configuration and easier use of RDMA NICs with Intel AI accelerators in Kubernetes.
intel/aubstream
intel/mfd-const
Modular Framework Design (MFD) module for const values