Dreamerzp's Stars
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
chatchat-space/Langchain-Chatchat
Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built on Langchain and LLMs such as ChatGLM, Qwen, and Llama.
microsoft/Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
PaddlePaddle/PaddleNLP
Easy-to-use and powerful LLM and SLM library with awesome model zoo.
togethercomputer/OpenChatKit
LianjiaTech/BELLE
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese conversational LLM).
yizhongw/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
Facico/Chinese-Vicuna
Chinese-Vicuna: a Chinese instruction-following LLaMA-based model; a low-resource Chinese LLaMA + LoRA approach, with its structure modeled on Alpaca.
esbatmop/MNBVC
MNBVC (Massive Never-ending BT Vast Chinese corpus): an ultra-large-scale Chinese corpus, benchmarked against the 40T of data used to train ChatGPT. The MNBVC dataset covers not only mainstream culture but also niche subcultures and even "Martian script". It includes plain-text Chinese data of every form: news, essays, novels, books, magazines, papers, scripts, forum posts, wikis, classical poetry, lyrics, product descriptions, jokes, embarrassing stories, chat logs, and more.
mymusise/ChatGLM-Tuning
A fine-tuning approach based on ChatGLM-6B + LoRA.
hiyouga/ChatGLM-Efficient-Tuning
Efficient fine-tuning of ChatGLM-6B with PEFT.
hyunwoongko/transformer
Transformer: PyTorch Implementation of "Attention Is All You Need"
PhoebusSi/Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible. (A fine-tuning platform built to be easy for researchers to pick up and use with large models.)
unit-mesh/unit-minions
"Boosting R&D Efficiency with AI: Train Your Own LoRA" — covers LoRA training for Llama (Alpaca LoRA) and ChatGLM (ChatGLM Tuning). Training tasks include user-story generation, test-code generation, code-assisted generation, text-to-SQL, text-to-code, and more.
MediaBrain-SJTU/MING
MING (明医): a Chinese medical consultation LLM.
RiseInRose/MiniGPT-4-ZH
Chinese translation of the MiniGPT-4 deployment guide, with improved deployment details.
lich99/ChatGLM-finetune-LoRA
Code for fine-tuning ChatGLM-6B using low-rank adaptation (LoRA).
yanqiangmiffy/InstructGLM
ChatGLM-6B instruction learning | instruction data | Instruct.
huang1332/finetune_dataset_maker
A fine-tuning dataset generation tool designed for ChatGLM; come make your own catgirl character.
liangwq/Chatglm_lora_multi-gpu
Multi-GPU ChatGLM training using DeepSpeed and
sunzeyeah/RLHF
Implementation of a Chinese ChatGPT.
Ulov888/chatpdflike
An approximate implementation similar to ChatPDF.
dotvignesh/PDFChat
The PDFChat app allows you to chat with your PDF files in natural language.
lxe/llama-tune
LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers.
unicornlaunching/langchain-and-elevenlabs-with-pdf-analysis
How might we combine OpenAI, Langchain, and ElevenLabs to speak out responses to prompts using a body of knowledge encapsulated in PDFs?
jakedahn/cloudflare-agent-qa
Experimenting with langchain, FAISS, OpenAI Embeddings, and GPT-3
tqfang/comet-deepspeed
Train large COMET (T5-3B/GPT2-XL) with small memory (on 11GB memory GPUs like 1080/2080) using DeepSpeed.
Zanejins/langchain-zh
LangChain documentation in Chinese.
williamgay25/chatPDF
Chat with PDFs using LangChain.