Pinned Repositories
ipex-llm
Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
vendor-intel-utils
android-internal-study
android-reverse
A collection of Android reverse engineering tools / Awesome Android Reverse Tools
device-androidia
device-androidia-mixins
gmalg
Source code for the Chinese national (SM) cryptographic algorithms SM1, SM2, SM3, and SM4
llama2.c
Inference Llama 2 in one file of pure C
SMx
Chinese commercial cryptography algorithms SMx (SM2, SM3, SM4)
sunyijin's Repositories
sunyijin/android-internal-study
sunyijin/android-reverse
A collection of Android reverse engineering tools / Awesome Android Reverse Tools
sunyijin/device-androidia
sunyijin/device-androidia-mixins
sunyijin/gmalg
Source code for the Chinese national (SM) cryptographic algorithms SM1, SM2, SM3, and SM4
sunyijin/llama2.c
Inference Llama 2 in one file of pure C
sunyijin/SMx
Chinese commercial cryptography algorithms SMx (SM2, SM3, SM4)