deepspeed
There are 90 repositories under the deepspeed topic.
InternLM/lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
PKU-Alignment/safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
zjunlp/KnowLM
An open-source, knowledgeable large language model framework.
alibaba/Megatron-LLaMA
Best practices for training LLaMA models in Megatron-LM.
antgroup/glake
GLake: optimizing GPU memory management and I/O transfer.
LambdaLabsML/distributed-training-guide
Best practices & guides on how to write distributed pytorch training code
Coobiw/MPP-LLaVA
Personal project: MPP-Qwen14B & MPP-Qwen-Next (multimodal pipeline parallelism based on Qwen-LM). Supports [video/image/multi-image] {SFT/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on a 24 GB RTX 3090/4090.
shm007g/LLaMA-Cult-and-More
Large Language Models for All, 🦙 Cult and More. Stay in touch!
Xirider/finetune-gpt2xl
Guide: fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed.
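As a flavor of what such a setup looks like, here is a minimal, hedged sketch (not the repo's actual script) of fine-tuning GPT-2 XL with the Hugging Face Trainer backed by DeepSpeed; the data file and config path are placeholder assumptions, and the script would be launched with the `deepspeed` launcher.

```python
# Hypothetical sketch: single-GPU GPT-2 XL fine-tuning via HF Trainer +
# DeepSpeed. "train.txt" and "ds_config.json" are placeholder names,
# not files from the repository above.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

# Plain-text corpus, tokenized into causal-LM training samples.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="gpt2xl-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed="ds_config.json",  # ZeRO/offload settings live in this file
)
Trainer(
    model=model, args=args, train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```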
OpenMOSS/CoLLiE
Collaborative Training of Large Language Models in an Efficient Way
openpsi-project/ReaLHF
Super-Efficient RLHF Training of LLMs with Parameter Reallocation
sunzeyeah/RLHF
Implementation of a Chinese ChatGPT.
stanleylsx/llms_tool
A training and testing tool for large language models built on HuggingFace. Supports a web UI and terminal inference for each model, parameter-efficient and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization.
bobo0810/LearnDeepSpeed
DeepSpeed tutorials, annotated examples, and study notes (efficient training of large models).
git-cloner/llama2-lora-fine-tuning
LLaMA 2 fine-tuning with DeepSpeed and LoRA.
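For orientation, a hedged sketch (not this repo's code) of the LoRA half of such a setup: wrap a LLaMA 2 checkpoint in peft adapters before handing it to a DeepSpeed-backed trainer. The checkpoint name and target modules are assumptions.

```python
# Hypothetical LoRA setup with peft; the checkpoint and target_modules
# are assumptions, not taken from the repository above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices train
# The wrapped model can then go through a DeepSpeed-enabled Trainer as usual.
```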
jackaduma/ChatGLM-LoRA-RLHF-PyTorch
A full pipeline to fine-tune the ChatGLM LLM with LoRA and RLHF on consumer hardware. An implementation of RLHF (reinforcement learning from human feedback) on top of the ChatGLM architecture. Basically ChatGPT, but with ChatGLM.
HomebrewML/revlib
A simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload.
CoinCheung/gdGPT
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP.
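As background on what "pipeline mode" means here, a minimal toy sketch of DeepSpeed's pipeline engine; the layer stack and config path are placeholders, not gdGPT's actual model.

```python
# Toy illustration of DeepSpeed pipeline parallelism; the layers and
# "ds_config.json" are placeholders, not gdGPT's setup.
import deepspeed
import torch.nn as nn
from deepspeed.pipe import PipelineModule

# Express the network as a flat list of layers so DeepSpeed can split it
# into stages, each stage living on its own GPU.
layers = [nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)]
model = PipelineModule(layers=layers, num_stages=2)

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",
)
# engine.train_batch(data_iter) then drives the pipelined forward/backward.
```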
OpenCSGs/llm-inference
llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, compute resource management, monitoring, and more.
xyjigsaw/LLM-Pretrain-SFT
Scripts for LLM pre-training and fine-tuning (with/without LoRA, DeepSpeed).
billvsme/train_law_llm
✏️ A zero-cost, hands-on LLM fine-tuning project: ⚡️ train a legal LLM step by step on Colab, based on microsoft/phi-1_5 and chatglm3, covering LoRA fine-tuning and full-parameter fine-tuning.
saforem2/l2hmc-qcd
Application of the L2HMC algorithm to simulations in lattice QCD.
glb400/Toy-RecLM
A toy large model for recommender systems based on LLaMA2, SASRec, and Meta's generative recommenders, plus notes and experiments on the official implementation of Meta's generative recommenders.
jackaduma/Alpaca-LoRA-RLHF-PyTorch
A full pipeline to fine-tune the Alpaca LLM with LoRA and RLHF on consumer hardware. An implementation of RLHF (reinforcement learning from human feedback) on top of the Alpaca architecture. Basically ChatGPT, but with Alpaca.
argonne-lcf/LLM-Inference-Bench
LLM-Inference-Bench
l294265421/my-llm
All about large language models
pszemraj/ai-msgbot
Training and implementation of chatbots built on GPT-like architectures with the aitextgen package, enabling dynamic conversations.
liangyuwang/Tiny-DeepSpeed
Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library
5663015/LLMs_train
A single codebase for instruction fine-tuning of large models.
nawnoes/pytorch-gpt-x
Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.
saforem2/ezpz
Train across all your devices, ezpz 🍋
Beomi/transformers-language-modeling
Train 🤗transformers with DeepSpeed: ZeRO-2, ZeRO-3
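For reference, a representative ZeRO-3 configuration with CPU offload, written as a Python dict (the values are illustrative assumptions); such a dict can be passed to `TrainingArguments(deepspeed=...)` or saved as JSON for the `deepspeed` launcher.

```python
# Illustrative ZeRO-3 config; batch sizes and offload targets are assumptions.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                              # partition params, grads, optimizer state
        "offload_param": {"device": "cpu"},      # keep parameters in CPU RAM
        "offload_optimizer": {"device": "cpu"},  # keep optimizer state in CPU RAM
    },
}
```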
VodLM/vod
End-to-end training of Retrieval-Augmented LMs (REALM, RAG)
wangclnlp/DeepSpeed-Chat-Extension
This repo contains extensions of DeepSpeed-Chat for fine-tuning LLMs (SFT + RLHF).
dyedd/deepspeed-diffusers
🚀 Native training of Diffusers with DeepSpeed.
Raumberg/myllm
Multi-node distributed LLM training framework