pipeline-parallelism

There are 21 repositories under the pipeline-parallelism topic.

  • hpcaitech/ColossalAI

    Making large AI models cheaper, faster and more accessible

    Language: Python
  • microsoft/DeepSpeed

    DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
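    (A hedged sketch of DeepSpeed's pipeline-parallel training API appears after this list.)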

    Language: Python
  • bigscience-workshop/petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
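    (A hedged Petals inference sketch appears after this list.)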

    Language: Python
  • kakaobrain/torchgpipe

    A GPipe implementation in PyTorch
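    (A minimal torchgpipe usage sketch appears after this list.)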

    Language: Python
  • PaddlePaddle/PaddleFleetX

    PaddlePaddle's large model development kit, providing a full-workflow development toolchain for large language models, cross-modal large models, bio-computing large models, and other domains.

    Language: Python
  • Oneflow-Inc/libai

    LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training

    Language: Python
  • Coobiw/MPP-LLaVA

    Personal project: MPP-Qwen14B & MPP-Qwen-Next (multimodal pipeline parallelism based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on an RTX 3090/4090 with 24 GB.

    Language: Jupyter Notebook
  • InternLM/InternEvo

    InternEvo is an open-source, lightweight training framework that aims to support model pre-training without extensive dependencies.

    Language: Python
  • alibaba/EasyParallelLibrary

    Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.

    Language: Python
  • Shenggan/awesome-distributed-ml

    A curated list of awesome projects and papers for distributed training or inference

  • torchpipe/torchpipe

    Serving inside PyTorch

    Language: C++
  • xrsrke/pipegoose

    Large-scale 4D-parallelism pre-training for 🤗 transformers with Mixture of Experts *(still a work in progress)*

    Language: Python
  • AlibabaPAI/DAPPLE

    An Efficient Pipelined Data Parallel Approach for Training Large Models

    Language: Python
  • Shigangli/Chimera

    Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.

    Language: Python
  • saareliad/FTPipe

    FTPipe and related pipeline model parallelism research.

    Language: Python
  • nawnoes/pytorch-gpt-x

    Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism.

    Language: Python
  • fanpu/DynPartition

    Official implementation of DynPartition: Automatic Optimal Pipeline Parallelism of Dynamic Neural Networks over Heterogeneous GPU Systems for Inference Tasks

    Language: Python
  • garg-aayush/model-parallelism

    Model parallelism for NN architectures with skip connections (e.g., ResNets, UNets)

    Language: Python
  • torchpipe/torchpipe.github.io

    Docs for torchpipe: https://github.com/torchpipe/torchpipe

    Language: MDX
  • explcre/pipeDejavu

    pipeDejavu: Hardware-aware Latency Predictable, Differentiable Search for Faster Config and Convergence of Distributed ML Pipeline Parallelism

    Language: Jupyter Notebook
  • LER0ever/HPGO

    Development of Project HPGO | Hybrid Parallelism Global Orchestration
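
For the microsoft/DeepSpeed entry above, here is a minimal, hedged sketch of pipeline-parallel training with its `PipelineModule`, following the pattern of the DeepSpeed pipeline tutorial. The layer sizes, stage count, and the `ds_config.json` path are placeholders; a real run needs the `deepspeed` launcher and at least two GPUs.

```python
# Hedged sketch: pipeline-parallel training with DeepSpeed's PipelineModule.
# Layer sizes, stage count, and the config path are illustrative placeholders.
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# The model is given as a flat list of layers; DeepSpeed partitions this
# list into `num_stages` pipeline stages.
layers = [
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
]
model = PipelineModule(layers=layers, num_stages=2, loss_fn=nn.MSELoss())

# `ds_config.json` is a placeholder DeepSpeed config (micro-batch size,
# gradient accumulation steps, optimizer, ...).
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config="ds_config.json",
)

# train_batch() pulls micro-batches from a data iterator and runs the
# forward/backward pipeline schedule across the stages:
#   loss = engine.train_batch(data_iter=train_iter)
```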
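
For the bigscience-workshop/petals entry above, a hedged inference sketch following the pattern of the Petals README. The model id is an example and must be one currently served by the public swarm, which has to be reachable for this to run.

```python
# Hedged sketch: BitTorrent-style distributed inference with Petals.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model id served by the swarm
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Each forward pass is pipelined across volunteer servers, each hosting a
# contiguous slice of the transformer layers.
inputs = tokenizer("Pipeline parallelism lets us", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```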
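
For the kakaobrain/torchgpipe entry above, a minimal sketch of the GPipe wrapper it provides. The layer sizes, stage balance, and chunk count are arbitrary, and two CUDA devices are assumed.

```python
# Minimal sketch: GPipe-style pipeline parallelism with torchgpipe.
import torch
from torch import nn
from torchgpipe import GPipe

# The model must be an nn.Sequential so it can be cut into stages.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Place the first two modules on GPU 0 and the last one on GPU 1, and split
# each mini-batch into 8 micro-batches that flow through the pipeline.
model = GPipe(model, balance=[2, 1], chunks=8)

x = torch.randn(64, 1024).to(model.devices[0])  # input lives on the first stage
y = model(x)                                    # output is produced on the last stage
y.sum().backward()                              # backward also runs stage by stage
```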