Pinned Repositories
ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
267-distil-whisper-asr
iphone_to_pc_photos_scripts
meta-clanton-jay
oe-core
Obsolete mirror; use "openembedded-core" instead
openvino_install_checks
openvino_training_extensions
Trainable models and NN optimization tools
openvino
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference