
OneFlow

OneFlow is a performance-centered and open-source deep learning framework.


Latest News

  • Version 0.5.0 is out!
    • First-class support for eager execution. The deprecated APIs have been moved to oneflow.compatible.single_client
    • Drop-in replacement of import torch for existing PyTorch projects. You can test it by interchanging import oneflow as torch and import torch as flow (see the sketch after this list).
    • Full changelog
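
As an illustration of the drop-in replacement mentioned above, existing PyTorch code can keep the name torch while actually running on OneFlow. A minimal sketch, assuming the PyTorch-style eager API that oneflow 0.5.0 mirrors (ones, matmul):

    import oneflow as torch  # existing code keeps saying "torch" but now runs on OneFlow

    x = torch.ones(2, 3)
    w = torch.ones(3, 4)
    y = torch.matmul(x, w)   # eager execution, same call sites as in PyTorch
    print(y.shape)           # a 2x4 tensor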

Install OneFlow

System Requirements

  • Python 3.6, 3.7, 3.8, 3.9

  • (Highly recommended) Upgrade pip

    python3 -m pip install --upgrade pip #--user
    
  • CUDA Toolkit Linux x86_64 Driver

    • CUDA runtime is statically linked into OneFlow. OneFlow will run on the minimum supported driver version and any newer driver. For more information, please refer to the CUDA compatibility documentation.

    • Please upgrade your NVIDIA driver to version 440.33 or above and install OneFlow for CUDA 10.2 if possible (see the driver check after this list).
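
To compare your installed driver against the minimum versions above, a small check such as the following can help. A sketch only: it assumes nvidia-smi is available on PATH (the query flags used are standard nvidia-smi options):

    import subprocess

    # Ask nvidia-smi for the driver version and print it so it can be
    # compared with the minimum versions listed above (e.g. 440.33 for CUDA 10.2).
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        stdout=subprocess.PIPE,
        universal_newlines=True,  # keeps Python 3.6 compatibility
        check=True,
    )
    print("NVIDIA driver version:", result.stdout.strip())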

Install with Pip Package

  • To install the latest stable release of OneFlow with CUDA support:

    python3 -m pip install -f https://release.oneflow.info oneflow==0.5.0+cu102
  • To install the nightly release of OneFlow with CUDA support:

    python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/cu102
  • To install other available builds for different variants:

    • Stable
      python3 -m pip install --find-links https://release.oneflow.info oneflow==0.5.0+[PLATFORM]
    • Nightly
      python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/[PLATFORM]
      
    • All available [PLATFORM] values:

      Platform            CUDA Driver Version   Supported GPUs
      cu112               >= 450.80.02          GTX 10xx, RTX 20xx, A100, RTX 30xx
      cu111               >= 450.80.02          GTX 10xx, RTX 20xx, A100, RTX 30xx
      cu110, cu110_xla    >= 450.36.06          GTX 10xx, RTX 20xx, A100
      cu102, cu102_xla    >= 440.33             GTX 10xx, RTX 20xx
      cu101, cu101_xla    >= 418.39             GTX 10xx, RTX 20xx
      cu100, cu100_xla    >= 410.48             GTX 10xx, RTX 20xx
      cpu                 N/A                   N/A
  • If you are in China, you can run this to have pip download packages from a domestic mirror of PyPI:

    python3 -m pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
    

    For more information, please refer to the PyPI mirror usage guide (pypi 镜像使用帮助).
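
After installing a wheel, a quick import-level check confirms that the package loads and that basic eager ops run. A minimal sketch, assuming the 0.5.0 PyTorch-style API; uncomment the CUDA line only if you installed a CUDA build (cu102, cu111, ...):

    import oneflow as flow

    print(flow.__version__)   # e.g. 0.5.0 for the stable wheel
    x = flow.ones(2, 3)       # eager tensor on CPU
    # x = flow.ones(2, 3, device="cuda")  # CUDA builds only (assumed PyTorch-style device argument)
    print(x.shape)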

Use docker image

docker pull oneflowinc/oneflow:nightly-cuda10.2
docker pull oneflowinc/oneflow:nightly-cuda11.1

Build from Source

Clone Source Code
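
For example, a shallow clone (the URL assumes the official Oneflow-Inc/oneflow repository on GitHub):

    git clone https://github.com/Oneflow-Inc/oneflow --depth=1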
Build OneFlow
  • Option 1: Please refer to this repo

  • Option 2: Build in a docker container

    • Pull a docker image:

      docker pull oneflowinc/oneflow-manylinux2014-cuda10.2:0.1
      

      All images available: https://hub.docker.com/u/oneflowinc

    • In the root directory of OneFlow source code, run:

      python3 docker/package/manylinux/build_wheel.py --inplace --python_version=3.6
      

      This should produce .whl files in the wheelhouse directory.

    • If you are in China, you might need to add these flags:

      --use_tuna --use_system_proxy --use_aliyun_mirror
      
    • You can choose the CUDA and Python versions of the wheel by adding:

      --cuda_version=10.1 --python_version=3.6,3.7
      
    • For more useful flags, please run the script with --help or refer to the source code of the script.

  • Option 3: Build on bare metal

    • Install dependencies

      • on Ubuntu 20.04, run:
        sudo apt install -y libopenblas-dev nasm g++ gcc python3-pip cmake autoconf libtool
        
      • on macOS, run:
        brew install nasm
        
    • In the root directory of OneFlow source code, run:

      mkdir build
      cd build
      
    • Configure the project, inside the build directory:

      • If you are in China

        run this to configure for CUDA:

        cmake .. -C ../cmake/caches/cn/cuda.cmake
        

        run this to configure for CPU-only:

        cmake .. -C ../cmake/caches/cn/cpu.cmake
        
      • If you are not in China

        run this to configure for CUDA:

        cmake .. -C ../cmake/caches/international/cuda.cmake
        

        run this to configure for CPU-only:

        cmake .. -C ../cmake/caches/international/cpu.cmake
        
    • Build the project; inside the build directory, run:

      make -j$(nproc)
      
    • Add oneflow to your PYTHONPATH; inside the build directory, run:

      source source.sh
      

      Please note that this change is not permanent; it only applies to the current shell session.

    • Simple validation

      python3 -m oneflow --doctor
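
      Beyond the doctor command, a short eager-mode smoke test confirms that the freshly built package imports and runs. A sketch, assuming the build succeeded and that source.sh above put oneflow on your PYTHONPATH:

      import oneflow as flow

      # A tiny eager computation against the locally built package.
      x = flow.ones(2, 3)
      y = flow.matmul(x, flow.ones(3, 4))
      print(flow.__version__, y.shape)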
      

Troubleshooting

Please refer to the troubleshooting guide for common issues you might encounter when compiling and running OneFlow.

Advanced features

XRT
  • You can check this doc for more details on how to use XLA and TensorRT with OneFlow.

Getting Started

Documentation

Model Zoo and Benchmark

Communication

The Team

OneFlow was originally developed by OneFlow Inc and Zhejiang Lab.

License

Apache License 2.0