Pinned Repositories
stable-diffusion-webui
Stable Diffusion web UI
cake
Distributed LLM and Stable Diffusion inference for mobile, desktop, and server.
ipex-llm
Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., a local PC with an iGPU and NPU, or a discrete GPU such as Arc, Flex, or Max); integrates seamlessly with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc.
localsend
An open-source cross-platform alternative to AirDrop
mlc-llm
Universal LLM Deployment Engine with ML Compilation
TensorRT-LLM
TensorRT-LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines containing state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for creating Python and C++ runtimes that execute those TensorRT engines.
GLM-4
GLM-4 series: Open Multilingual Multimodal Chat LMs
TriDefender
Config files for my GitHub profile.
WSA-Script
Integrate Magisk root and Google Apps into WSA (Windows Subsystem for Android) with GitHub Actions