pipeline-parallelism
There are 21 repositories under the pipeline-parallelism topic.
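Most of the frameworks below, the GPipe-style ones in particular, share the same core idea: split the model into sequential stages and feed micro-batches through them so the stages overlap. A minimal pure-Python sketch of the resulting "bubble" arithmetic (the function names are illustrative, not taken from any listed repo):

```python
# Simulate a GPipe-style forward schedule to show how micro-batching shrinks
# the pipeline bubble. With S stages and M micro-batches, where each stage
# takes one tick per micro-batch, the forward pass takes S + M - 1 ticks
# instead of S * M, and the idle ("bubble") fraction is (S - 1) / (M + S - 1).

def gpipe_forward_ticks(stages: int, micro_batches: int) -> int:
    """Ticks for the forward pass: micro-batch m enters stage s at tick
    m + s, so the last one finishes at tick (M - 1) + (S - 1) + 1."""
    return stages + micro_batches - 1

def bubble_fraction(stages: int, micro_batches: int) -> float:
    """Fraction of stage-ticks spent idle while the pipeline fills/drains."""
    total_slots = stages * gpipe_forward_ticks(stages, micro_batches)
    busy_slots = stages * micro_batches
    return (total_slots - busy_slots) / total_slots

if __name__ == "__main__":
    # 4 stages, 1 micro-batch: no overlap at all -> 75% idle.
    print(bubble_fraction(4, 1))               # 0.75
    # 4 stages, 8 micro-batches: bubble shrinks to 3/11.
    print(round(bubble_fraction(4, 8), 3))     # 0.273
```

This is why GPipe, DeepSpeed, and the other frameworks below expose a micro-batch (or "chunks") knob: raising M relative to S is the main lever for hiding the fill/drain bubble.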
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
kakaobrain/torchgpipe
A GPipe implementation in PyTorch
PaddlePaddle/PaddleFleetX
PaddlePaddle's large-model development suite, providing an end-to-end development toolchain for large language models, cross-modal large models, biocomputing large models, and related domains.
Oneflow-Inc/libai
LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
Coobiw/MPP-LLaVA
Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on an RTX 3090/4090 with 24 GB.
InternLM/InternEvo
InternEvo is an open-source lightweight training framework that aims to support model pre-training without the need for extensive dependencies.
alibaba/EasyParallelLibrary
Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.
Shenggan/awesome-distributed-ml
A curated list of awesome projects and papers for distributed training or inference
torchpipe/torchpipe
Serving inside PyTorch
xrsrke/pipegoose
Large-scale 4D parallelism pre-training for 🤗 transformers with Mixture of Experts *(still work in progress)*
AlibabaPAI/DAPPLE
An Efficient Pipelined Data Parallel Approach for Training Large Models
Shigangli/Chimera
Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.
saareliad/FTPipe
FTPipe and related pipeline model parallelism research.
nawnoes/pytorch-gpt-x
Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.
fanpu/DynPartition
Official implementation of DynPartition: Automatic Optimal Pipeline Parallelism of Dynamic Neural Networks over Heterogeneous GPU Systems for Inference Tasks
garg-aayush/model-parallelism
Model parallelism for NN architectures with skip connections (e.g. ResNets, UNets)
torchpipe/torchpipe.github.io
Docs for torchpipe: https://github.com/torchpipe/torchpipe
explcre/pipeDejavu
pipeDejavu: Hardware-aware Latency Predictable, Differentiable Search for Faster Config and Convergence of Distributed ML Pipeline Parallelism
LER0ever/HPGO
Development of Project HPGO | Hybrid Parallelism Global Orchestration
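Several of these projects face the same sub-problem: splitting a model's layers into contiguous stages of roughly equal cost (torchgpipe exposes this as its `balance` argument; DynPartition and HPGO automate it). One standard way to compute such a split is a min-max dynamic program over prefix sums. The sketch below is illustrative and not taken from any repo above:

```python
# Exact min-max contiguous partition of per-layer costs into `stages` groups.
# dp[s][i] = minimal achievable max-stage-cost using s stages for costs[:i].

def best_partition(costs: list[float], stages: int) -> tuple[list[int], float]:
    n = len(costs)
    prefix = [0.0]
    for c in costs:
        prefix.append(prefix[-1] + c)           # prefix[i] = sum(costs[:i])
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(stages + 1)]
    cut = [[0] * (n + 1) for _ in range(stages + 1)]
    dp[0][0] = 0.0
    for s in range(1, stages + 1):
        for i in range(s, n + 1):
            for j in range(s - 1, i):           # last stage is costs[j:i]
                cand = max(dp[s - 1][j], prefix[i] - prefix[j])
                if cand < dp[s][i]:
                    dp[s][i] = cand
                    cut[s][i] = j
    # Walk the cut points back to recover a torchgpipe-style balance list
    # (number of layers assigned to each stage, in order).
    balance, i = [], n
    for s in range(stages, 0, -1):
        j = cut[s][i]
        balance.append(i - j)
        i = j
    return balance[::-1], dp[stages][n]

if __name__ == "__main__":
    # One expensive layer at the end forces it into its own stage.
    print(best_partition([1, 1, 1, 10], 2))     # ([3, 1], 10.0)
```

The O(stages * n^2) DP is exact and cheap for layer counts seen in practice; heuristic partitioners in the frameworks above trade this exactness for profiling-aware cost models and heterogeneous-device constraints.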