intel/intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms ⚡
Python · Apache-2.0
Issues
plugin init failed
#1377 opened - 3
pip install missing dependencies
#1365 opened - 2
Running Stable Diffusion on IPEX CPU has error
#1345 opened - 1
rag plugin init failed if retrieval_type is bm25
#1315 opened - 2
Conflict between ipex and pytorch
#1311 opened - 2
RuntimeError: Chatbot instance has not been set.
#1308 opened - 6
422 Unprocessable Entity using Neural Chat via OpenAI interface with meta-llama/llama-2-7b-chat-hf
#1288 opened - 5
QLoRA on CPU - Example ERROR: "undefined symbol"
#1287 opened - 6
Device does not exist / is not supported error with neuralchat deploy_chatbot_on_xpu notebook
#1276 opened - 5
[NeuralChat] Retrieval example failure
#1252 opened - 0
[NeuralChat] Generate fails for LLaVA models
#1244 opened - 1
i7-12700H CPU Tests
#1220 opened - 5
Neural Chat Finetune Mistral Fails
#1181 opened - 2
Baichuan2-13B-Chat inference problem
#1148 opened - 4
Support Qwen-1.8B-Chat
#1145 opened - 6
Qwen-14B-Chat inference repeat
#1144 opened - 7
getting error when building from source
#1142 opened - 3
some issues with model.generate in itrex
#1124 opened - 5
None of examples on README page works
#1117 opened - 1
BF16 Inference
#1115 opened - 2
Deploying on virtual machines?
#1106 opened - 3
Can't load woq int4 model
#1104 opened - 4
whisper.cpp
#1094 opened - 1
environment problem in qat for stable diffusion
#1093 opened - 2
Segmentation fault (core dumped)
#1090 opened - 7
Load Quantized model
#1068 opened - 5
Support Mixtral
#963 opened - 3
failed to run model conversion for qwen-7B
#953 opened - 4
main example for qLoRA fails: AttributeError: 'Model' object has no attribute 'named_parameters'
#951 opened - 6
Segmentation fault when run llm chat
#944 opened - 3
Generating meaningless results
#929 opened