zjulgc's Stars
hiyouga/LLaMA-Factory
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
unslothai/unsloth
Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
it-ebooks-0/geektime-books
:books: Geek Time (极客时间) e-books
huggingface/trl
Train transformer language models with reinforcement learning.
liguodongiot/llm-action
This project shares the technical principles behind large language models along with hands-on practical experience.
lm-sys/RouteLLM
A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!
princeton-nlp/SWE-bench
[ICLR 2024] SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
WangRongsheng/awesome-LLM-resourses
🧑🚀 Summary of the world's best LLM resources.
XueFuzhao/OpenMoE
A family of open-sourced Mixture-of-Experts (MoE) Large Language Models
yuanxiaosc/Machine-Learning-Book
The "Machine Learning Treasury" (《机器学习宝典》) includes Google's Machine Learning Crash Course, a machine learning glossary, the Rules of Machine Learning, and answers to common machine learning questions. A reference for machine learning and deep learning researchers and enthusiasts!
hzwer/WritingAIPaper
Writing AI Conference Papers: A Handbook for Beginners
google-ai-edge/model-explorer
A modern model graph visualizer and debugger
deepseek-ai/DeepSeek-MoE
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
huybery/Awesome-Code-LLM
👨💻 An awesome, curated list of the best code LLMs for research.
facebookresearch/llm-transparency-tool
LLM Transparency Tool (LLM-TT), an open-source interactive toolkit for analyzing internal workings of Transformer-based language models. *Check out demo at* https://huggingface.co/spaces/facebook/llm-transparency-tool-demo
OpenAutoCoder/Agentless
Agentless🐱: an agentless approach to automatically solving software development problems
allenai/OLMoE
OLMoE: Open Mixture-of-Experts Language Models
Leeroo-AI/mergoo
A library for easily merging multiple LLM experts and efficiently training the merged LLM.
liuhuigmail/GrowingBugRepository
A bug repository that keeps growing
TUDB-Labs/mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters
LLM-Testing/LLM4SoftwareTesting
bigcode-project/bigcodebench
BigCodeBench: Benchmarking Code Generation Towards AGI
liuqidong07/MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
GCYZSL/MoLA
amazon-science/mxeval
iSEngLab/AwesomeLLM4APR
A Systematic Literature Review on Large Language Models for Automated Program Repair
TUDB-Labs/MixLoRA
State-of-the-art Parameter-Efficient MoE Fine-tuning Method
FloatAI/humaneval-xl
[LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization
mikecovlee/mLoRA
This repository has been transferred to https://github.com/TUDB-Labs/MoE-PEFT
TUDB-Labs/MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT