pipeline-parallelism

There are 20 repositories under the pipeline-parallelism topic.

  • hpcaitech/ColossalAI

    Making large AI models cheaper, faster and more accessible

    Language: Python
  • microsoft/DeepSpeed

    DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

    Language: Python
  • bigscience-workshop/petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

    Language: Python
  • kakaobrain/torchgpipe

    A GPipe implementation in PyTorch (a minimal usage sketch appears after this list)

    Language: Python
  • PaddlePaddle/PaddleFleetX

    PaddlePaddle large-model development suite, providing a full-pipeline development toolchain for large language models, cross-modal large models, bio-computing large models, and more.

    Language: Python
  • Oneflow-Inc/libai

    LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training

    Language: Python
  • Coobiw/MiniGPT4Qwen

    Personal project: MPP-Qwen14B (Multimodal Pipeline Parallel Qwen14B). Don't let poverty limit your imagination! Train your own 14B LLaVA-like MLLM on an RTX 3090/4090 with 24 GB.

    Language: Jupyter Notebook
  • alibaba/EasyParallelLibrary

    Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.

    Language: Python
  • Shenggan/awesome-distributed-ml

    A curated list of awesome projects and papers for distributed training or inference

  • torchpipe/torchpipe

    Boosting DL Service Throughput 1.5-4x by Ensemble Pipeline Serving with Concurrent CUDA Streams for PyTorch/LibTorch Frontend and TensorRT/CVCUDA, etc., Backends

    Language: C++
  • xrsrke/pipegoose

    Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)*

    Language: Python
  • AlibabaPAI/DAPPLE

    An Efficient Pipelined Data Parallel Approach for Training Large Models

    Language: Python
  • saareliad/FTPipe

    FTPipe and related pipeline model parallelism research.

    Language: Python
  • Shigangli/Chimera

    Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.

    Language: Python
  • nawnoes/pytorch-gpt-x

    Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism (see the hedged DeepSpeed pipeline sketch after this list).

    Language: Python
  • fanpu/DynPartition

    Official implementation of DynPartition: Automatic Optimal Pipeline Parallelism of Dynamic Neural Networks over Heterogeneous GPU Systems for Inference Tasks

    Language: Python
  • garg-aayush/model-parallelism

    Model parallelism for NN architectures with skip connections (e.g. ResNets, UNets)

    Language: Python
  • torchpipe/torchpipe.github.io

    Docs for torchpipe: https://github.com/torchpipe/torchpipe

    Language: MDX
  • explcre/pipeDejavu

    pipeDejavu: Hardware-aware Latency Predictable, Differentiable Search for Faster Config and Convergence of Distributed ML Pipeline Parallelism

    Language: Jupyter Notebook
  • LER0ever/HPGO

    Development of Project HPGO | Hybrid Parallelism Global Orchestration
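
As a quick illustration of the GPipe-style API exposed by kakaobrain/torchgpipe, the sketch below wraps a plain nn.Sequential model, splits it across two GPUs, and pushes micro-batches through the pipeline. This is a minimal sketch, assuming two visible GPUs; the layer sizes, the balance=[2, 3] split, and chunks=8 are illustrative choices, not values taken from the repository.

```python
import torch
from torch import nn
from torchgpipe import GPipe

# GPipe partitions a purely sequential model layer by layer,
# so the module must be an nn.Sequential (5 layers here).
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1024),
)

# balance puts 2 layers on the first GPU and 3 on the second;
# chunks is the number of micro-batches pipelined through the stages.
model = GPipe(model, balance=[2, 3], chunks=8)

# Inputs live on the first partition's device, outputs on the last.
x = torch.randn(64, 1024, device=model.devices[0])
y = model(x)
print(y.shape, y.device)
```

The chunks value is the usual GPipe trade-off: more micro-batches shrink the pipeline bubble at the cost of more per-chunk overhead.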
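The nawnoes/pytorch-gpt-x entry builds on DeepSpeed's pipeline engine. Below is a hedged sketch of that API (deepspeed.pipe.PipelineModule plus an engine from deepspeed.initialize); the layer stack, num_stages=2, the config values, and the random-data iterator are assumptions for illustration only, and the script is meant to be launched with the deepspeed launcher on two GPUs so the distributed backend is available.

```python
import torch
from torch import nn
import deepspeed
from deepspeed.pipe import PipelineModule, LayerSpec

# Pipeline partitioning needs torch.distributed set up first
# (handled by the deepspeed launcher).
deepspeed.init_distributed()

# Describe the network as a flat list of layer specs; DeepSpeed
# materializes each layer only on the pipeline stage that owns it.
layers = [LayerSpec(nn.Linear, 1024, 1024) for _ in range(8)]
model = PipelineModule(layers=layers, num_stages=2, loss_fn=nn.MSELoss())

# Illustrative config: the number of micro-batches per step follows
# from train_batch_size / train_micro_batch_size_per_gpu.
ds_config = {
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 4,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# Toy data: (input, label) pairs at the micro-batch size.
def batches():
    while True:
        x = torch.randn(4, 1024)
        yield x, x

# train_batch pulls micro-batches and runs the pipelined
# forward/backward schedule across both stages.
loss = engine.train_batch(data_iter=batches())
print(loss)
```

Several of the repositories above (ColossalAI, PaddleFleetX, LiBai, pipegoose) ship their own pipeline schedules, but the general pattern of declaring a layer list, choosing a stage count, and feeding micro-batches is similar.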