xuanerrr's Stars
wdndev/llm_interview_note
Notes on knowledge and interview questions relevant to large language model (LLM) algorithm (application) engineers
dvlab-research/Step-DPO
Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs"
NVIDIA/GenerativeAIExamples
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
reworkd/AgentGPT
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
modelscope/ms-swift
Use PEFT or Full-parameter to finetune 400+ LLMs (Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, ...) or 100+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2.5, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL2, Phi3.5-Vision, GOT-OCR2, ...).
modelscope/modelscope-agent
ModelScope-Agent: An agent framework connecting models in ModelScope with the world
AGI-Edgerunners/LLM-Agents-Papers
A repo listing papers related to LLM-based agents
AnkerLeng/Cpp-0-1-Resource
A carefully crafted C++ course: beginner materials from 0 to 1
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
FreedomIntelligence/LLMZoo
⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡
Joyce94/LLM-RLHF-Tuning
LLM Tuning with PEFT (SFT+RM+PPO+DPO with LoRA)
jasonvanf/llama-trl
LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA
mjpost/sacrebleu
Reference BLEU implementation that auto-downloads test sets and reports a version string to facilitate cross-lab comparisons
lansinuote/More_Simple_Reinforcement_Learning
hqsiswiliam/persona-adaptive-attention
X-PLUG/ChatPLUG
A Chinese Open-Domain Dialogue System
datawhalechina/learn-nlp-with-transformers
A repo illustrating the usage of Transformers, written in Chinese
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
datawhalechina/daily-interview
Interview experience notes compiled by Datawhale members, covering machine learning, CV, NLP, recommendation, development, and more; stars are welcome
hrwleo/dwnlpinterview
Datawhale NLP interview experience notes
datawhalechina/llm-cookbook
An introductory LLM tutorial for developers; the Chinese edition of Andrew Ng's large-model course series
forthespada/CS-Books
🔥🔥 Over 1,000 classic computer science books, personal notes, and resources referenced in the author's articles across various platforms. Book topics include C/C++, Java, Python, Go, data structures and algorithms, operating systems, backend architecture, computer systems, databases, computer networking, design patterns, frontend, assembly, and interview experience notes for campus and general recruitment~
zixian2021/AI-interview-cards
The most complete repository of AI algorithm interview questions: 1,000 questions across 25 categories
piDack/chat_zhenhuan
ChatGLM fine-tuned on a Zhen Huan (Empresses in the Palace) corpus
songhaoyu/BoB
The released code for the ACL 2021 paper 'BoB: BERT Over BERT for Training Persona-based Dialogue Models from Limited Personalized Data'
Quantum-Cheese/DeepReinforcementLearning_Pytorch
PyTorch implementations of multiple deep reinforcement learning algorithms (DQN, DDPG, TD3, PPO, A3C, ...) with OpenAI Gym
hiyouga/ChatGLM-Efficient-Tuning
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
langchain-ai/langchain
🦜🔗 Build context-aware reasoning applications