Pinned Repositories
oneAPI-samples
Samples for Intel® oneAPI Toolkits
cuda-pcl
A project demonstrating how to use the CUDA-PCL libraries.
ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
intel-extension-for-pytorch
A Python package that extends the official PyTorch to deliver additional performance on Intel platforms.
bopeng1234's Repositories