guicunbin's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
AUTOMATIC1111/stable-diffusion-webui
Stable Diffusion web UI
openai/whisper
Robust Speech Recognition via Large-Scale Weak Supervision
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
binary-husky/gpt_academic
A practical interactive interface for LLMs such as GPT/GLM, specially optimized for paper reading, polishing, and writing. Modular design with support for custom shortcut buttons & function plugins; project analysis & self-explanation for Python, C++, and other codebases; PDF/LaTeX paper translation & summarization; parallel queries to multiple LLMs; and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
gradio-app/gradio
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
reworkd/AgentGPT
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
zhayujie/chatgpt-on-wechat
A chatbot built on large language models, supporting integration with WeChat Official Accounts, WeCom, Feishu, DingTalk, and more. Choose from GPT-3.5/GPT-4o/GPT-o1/Claude/ERNIE Bot/iFlytek Spark/Tongyi Qianwen/Gemini/GLM-4/Kimi/LinkAI. It handles text, voice, and images, can access the operating system and the internet, and supports custom enterprise customer-service bots built on your own knowledge base.
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
amusi/CVPR2024-Papers-with-Code
A collection of CVPR 2024 papers and open-source projects
Tencent/rapidjson
A fast JSON parser/generator for C++ with both SAX and DOM style APIs
mlc-ai/web-llm
High-performance In-browser LLM Inference Engine
hua1995116/awesome-ai-painting
A curated collection of AI painting resources (platforms available in China and abroad, usage tutorials, parameter guides, deployment tutorials, industry news, etc.): Stable diffusion, AnimateDiff, Stable Cascade, Stable SDXL Turbo
liguodongiot/llm-action
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and real-world application deployment).
BlinkDL/ChatRWKV
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
huggingface/text-generation-inference
Large Language Model Text Generation Inference
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
triton-inference-server/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
LargeWorldModel/LWM
Large World Model -- Modeling text and video with a context of millions of tokens
InternLM/lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
ali-vilab/VGen
Official repo for VGen: a holistic video generation ecosystem built on diffusion models
modelscope/data-juicer
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷
ModelTC/lightllm
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
ray-project/kuberay
A toolkit to run Ray applications on Kubernetes
triton-inference-server/tensorrtllm_backend
The Triton TensorRT-LLM Backend
Vision-CAIR/MiniGPT4-video
Official code for Goldfish model for long video understanding and MiniGPT4-video for short video understanding
ExpressAI/AI-Gaokao
Gaokao Benchmark for AI