w5688414's Stars
microsoft/graphrag
A modular graph-based Retrieval-Augmented Generation (RAG) system
yangjianxin1/Firefly
Firefly: a training tool for large language models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
modelscope/agentscope
Start building LLM-empowered multi-agent applications in an easier way.
InternLM/MindSearch
🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)
meta-llama/llama-models
Utilities intended for use with Llama models.
google/gemma_pytorch
The official PyTorch implementation of Google's Gemma models
InternLM/xtuner
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
deepseek-ai/DeepSeek-Coder-V2
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
zou-group/textgrad
TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
AlibabaResearch/DAMO-ConvAI
DAMO-ConvAI: The official repository containing the codebase for Alibaba DAMO Conversational AI.
NVIDIA/NeMo-Aligner
Scalable toolkit for efficient model alignment
mistralai/mistral-common
microsoft/sammo
A library for prompt engineering and optimization (SAMMO = Structure-aware Multi-Objective Metaprompt Optimization)
LLM-Red-Team/metaso-free-api
🚀 Reverse-engineered API for Metaso AI Search (strengths: powerful retrieval and extra-long output). Supports high-speed streaming output and strong web search (web-wide or academic, in concise, in-depth, or research mode), zero-configuration deployment, and multi-token support. For testing only; for commercial use, please go to the official open platform.
varunshenoy/super-json-mode
Low latency JSON generation using LLMs ⚡️
inulute/perplexity-ai-app
The Perplexity AI desktop app, powered by Electron, bringing the magic of AI language processing to your desktop.
agi-templar/Stable-Alignment
Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
askaitools/askaitools-community-edition
A cutting-edge search engine project tailored specifically for AI products
dvlab-research/Step-DPO
Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs"
neuml/rag
🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data.
dinobby/ReConcile
xbmxb/RAG-query-rewriting
nuochenpku/Awesome-Role-Play-Papers
Awesome papers for role-playing with language models
Vance0124/Token-level-Direct-Preference-Optimization
Reference implementation for Token-level Direct Preference Optimization (TDPO)
Ag2S1/Sibyl-System
matthewrenze/self-reflection
Self-Reflection in LLM Agents: Effects on Problem-Solving Performance
yongchao98/PROMST
Automatic prompt optimization framework for multi-step agent tasks.
amazon-science/comm-prompt
CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for Complex Problem Solving (NAACL 2024 Findings)
lukeyoffe/debunc
chandar-lab/SubGoal_Distillation_LLM
Code for the paper "Sub-goal Distillation: A Method to Improve Small Language Agents", accepted at CoLLAs 2024.