Pinned Repositories
- ipex-llm: Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., a local PC with an iGPU and NPU, or a discrete GPU such as Arc, Flex, and Max); integrates seamlessly with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
- ai-innovation-bridge
- policy-library-intel-aws: Intel Cloud Optimization Module - AWS Sentinel Policies
- terraform-intel-aws-example-app
- terraform-intel-aws-mysql: Intel Cloud Optimization Module - AWS RDS MySQL
- terraform-intel-aws-postgresql: Intel Cloud Optimization Module - AWS RDS PostgreSQL
- terraform-intel-aws-vm: Intel Cloud Optimization Module - AWS VM
- terraform-intel-azure-mysql-flexible-server: Intel Cloud Optimization Module - Azure MySQL Flexible Server
- terraform-intel-azure-postgresql-flexible-server: Intel Cloud Optimization Module - Azure PostgreSQL Flexible Server
- AI-Hackathon