LuckyLhy's Stars
shadowsocks/ShadowsocksX-NG
Next Generation of ShadowsocksX
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
pandao/editor.md
The open source embeddable online markdown editor (component).
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
PaddlePaddle/PaddleNLP
👑 Easy-to-use and powerful NLP and LLM library with a 🤗 awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including 🗂 Text Classification, 🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis, etc.
OpenMOSS/MOSS
An open-source tool-augmented conversational language model from Fudan University
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
liguodongiot/llm-action
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and LLM application deployment).
BlinkDL/ChatRWKV
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.
ccfddl/ccf-deadlines
⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet). If you find it useful, please star this project, thanks~
baichuan-inc/Baichuan-7B
A large-scale 7B pretrained language model developed by Baichuan Inc.
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
mymusise/ChatGLM-Tuning
A fine-tuning recipe based on ChatGLM-6B + LoRA.
shenweichen/DeepCTR-Torch
【PyTorch】An easy-to-use, modular, and extensible package of deep-learning-based CTR models.
allenai/RL4LMs
A modular RL library to fine-tune language models to human preferences
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning.
bigscience-workshop/Megatron-DeepSpeed
Ongoing research on training transformer language models at scale, including BERT & GPT-2.
PKU-Alignment/safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
OpenBMB/VisCPM
[ICLR'24 spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint) | A Chinese-English bilingual multimodal large model series based on the CPM foundation models.
MichealWayne/books
A collection of front-end and design-related books (e-books).
RUCAIBox/LLMBox
A comprehensive library for implementing LLMs, including a unified training pipeline and thorough model evaluation.
shibing624/dialogbot
dialogbot provides search-based, task-oriented, and generative dialogue models. A dialogue bot supporting web-retrieval QA, domain-knowledge QA, task-guided QA, and chit-chat, ready to use out of the box.
glgh/awesome-llm-human-preference-datasets
A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval.
XueyangFeng/LLM-Agent-Paper-Digest
Papers related to LLM agents published at top conferences.
OpenBMB/BMPrinciples
A collection of phenomena observed during the scaling of big foundation models, which may develop into consensus, principles, or laws in the future.
jinlanfu/GPTScore
Source code for the paper "GPTScore: Evaluate as You Desire".
bzhangGo/rmsnorm
Root Mean Square Layer Normalization
ptzhangAlg/RecAlg