yikun2019's Stars
langgenius/dify
Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
geekan/MetaGPT
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
All-Hands-AI/OpenHands
🙌 OpenHands: Code Less, Make More
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
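A minimal sketch of the usual DeepSpeed training pattern, assuming a toy linear model, a placeholder config, and a run launched with the `deepspeed` CLI launcher (all illustrative, not taken from this list):

```python
import torch
import deepspeed

model = torch.nn.Linear(128, 2)  # toy stand-in for a real transformer
data = [(torch.randn(8, 128), torch.randint(0, 2, (8,))) for _ in range(10)]

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": False},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps model and optimizer into an engine that handles
# distributed setup, ZeRO partitioning, and mixed precision.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for inputs, labels in data:
    inputs, labels = inputs.to(engine.device), labels.to(engine.device)
    loss = torch.nn.functional.cross_entropy(engine(inputs), labels)
    engine.backward(loss)  # engine handles loss scaling / gradient reduction
    engine.step()          # optimizer step, LR schedule, gradient zeroing
```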
OpenBMB/ChatDev
Create customized software from a natural language idea (through LLM-powered multi-agent collaboration)
openai/openai-python
The official Python library for the OpenAI API
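A minimal sketch of a chat completion call with this client, assuming `OPENAI_API_KEY` is set in the environment; the model name and prompt are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
)
print(response.choices[0].message.content)
```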
Cinnamon/kotaemon
An open-source RAG-based tool for chatting with your documents.
black-forest-labs/flux
Official inference repo for FLUX.1 models
openai/swarm
Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by OpenAI Solution team.
liguodongiot/llm-action
This project shares the technical principles behind large language models and hands-on experience with them (LLM engineering and real-world LLM application deployment).
HKUDS/LightRAG
"LightRAG: Simple and Fast Retrieval-Augmented Generation"
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
mamba-org/mamba
The Fast Cross-Platform Package Manager
yangjianxin1/Firefly
Firefly: a training toolkit for large language models, supporting Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
meta-llama/llama-models
Utilities intended for use with Llama models.
QwenLM/Qwen-Agent
Agent framework and applications built upon Qwen>=2.0, featuring Function Calling, Code Interpreter, RAG, and Chrome extension.
modelscope/ms-swift
Use PEFT or Full-parameter to finetune 400+ LLMs (Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, ...) and 150+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2.5, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL2, Phi3.5-Vision, GOT-OCR2, ...).
gpt-omni/mini-omni
An open-source multimodal large language model that can hear and talk while thinking, featuring real-time end-to-end speech input and streaming audio output for conversation.
ictnlp/LLaMA-Omni
LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.
yuweihao/MambaOut
MambaOut: Do We Really Need Mamba for Vision?
microsoft/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
codefuse-ai/Awesome-Code-LLM
[TMLR] A curated list of language modeling research for code (and other software engineering activities), plus related datasets.
qhjqhj00/MemoRAG
Empowering RAG with a memory-based data interface for all-purpose applications!
bigscience-workshop/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
yyyujintang/Awesome-Mamba-Papers
Awesome Papers related to Mamba.
alibaba/Pai-Megatron-Patch
The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud.
TAG-Research/TAG-Bench
TAG-Bench: A benchmark for table-augmented generation (TAG)
XiudingCai/Awesome-Mamba-Collection
A curated collection of papers, tutorials, videos, and other valuable resources related to Mamba.
bronyayang/Law_of_Vision_Representation_in_MLLMs
Official implementation of the Law of Vision Representation in MLLMs
writer/writing-in-the-margins