OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. With OneFlow, it is easy to:
- program a model with a PyTorch-like API (see the sketch after this list)
- scale a model to n-dimensional parallel execution with the Global Tensor
- accelerate and deploy a model with the Graph Compiler
- Latest news: Version 1.0.0 is out!
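A minimal, illustrative sketch of the PyTorch-like API, the Global Tensor, and the `nn.Graph` compiler interface, assuming OneFlow is already installed; class and variable names are chosen only for illustration:

```python
import oneflow as flow
import oneflow.nn as nn

# 1. PyTorch-like eager API.
model = nn.Linear(3, 4)
x = flow.randn(2, 3)
print(model(x).shape)  # oneflow.Size([2, 4])

# 2. Global Tensor: give a tensor a placement (which ranks/devices hold it)
#    and an SBP signature (how it is split/broadcast) for n-D parallelism.
#    A single-rank CPU placement is used here purely for illustration.
placement = flow.placement("cpu", [0])
x_global = x.to_global(placement=placement, sbp=flow.sbp.broadcast)

# 3. Graph Compiler: wrap the eager module in a static nn.Graph.
class LinearGraph(nn.Graph):  # "LinearGraph" is an arbitrary name
    def __init__(self, module):
        super().__init__()
        self.module = module

    def build(self, inp):
        return self.module(inp)

graph = LinearGraph(model)
print(graph(x).shape)  # same result, executed as a compiled graph
```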
- Linux
- Python 3.7, 3.8, 3.9, 3.10, 3.11
- CUDA arch 60 or above
- CUDA Toolkit version 10.0 or above
- Nvidia driver version 440.33 or above
OneFlow works with the minimum supported driver version and any newer driver. For more information, please refer to the CUDA compatibility documentation.
To use the nightly docker image with CUDA 11.7:
docker pull oneflowinc/oneflow:nightly-cuda11.7
- (Highly recommended) Upgrade pip:
  python3 -m pip install --upgrade pip #--user
- To install the latest stable release of OneFlow with CUDA support:
  python3 -m pip install oneflow
- To install the nightly release of OneFlow with CPU-only support:
  python3 -m pip install --pre oneflow -f https://oneflow-staging.oss-cn-beijing.aliyuncs.com/branch/master/cpu
- To install the nightly release of OneFlow with CUDA support:
  python3 -m pip install --pre oneflow -f https://oneflow-staging.oss-cn-beijing.aliyuncs.com/branch/master/cu118
If you are in China, you can run the following to have pip download packages from a domestic PyPI mirror:
python3 -m pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
For more information, please refer to the PyPI mirror usage guide (pypi 镜像使用帮助).
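After installing, a quick check (an illustrative snippet, not an official command) confirms that the package imports and reports whether a CUDA device is usable:

```python
# Quick post-install check (illustrative).
import oneflow as flow

print(flow.__version__)           # installed OneFlow version
print(flow.cuda.is_available())   # True only for CUDA builds with a visible GPU
```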
Clone Source Code
- Clone from GitHub:
  git clone https://github.com/Oneflow-Inc/oneflow.git
- Or download and unzip the source archive:
  curl https://oneflow-public.oss-cn-beijing.aliyuncs.com/oneflow-src.zip -o oneflow-src.zip
  unzip oneflow-src.zip
Build OneFlow
- Install dependencies:
  apt install -y libopenblas-dev nasm g++ gcc python3-pip cmake autoconf libtool
  These dependencies are preinstalled in the official conda environment and docker image. You can use the official conda environment here, or pull the docker image:
  docker pull oneflowinc/manylinux2014_x86_64_cuda11.2
- In the root directory of the OneFlow source code, run:
  mkdir build
  cd build
- Configure the project inside the build directory:
  - If you are in China, configure for CPU-only like this:
    cmake .. -C ../cmake/caches/cn/cpu.cmake
    or configure for CUDA like this:
    cmake .. -C ../cmake/caches/cn/cuda.cmake -DCMAKE_CUDA_ARCHITECTURES=80 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DCUDNN_ROOT_DIR=/usr/local/cudnn
  - If you are not in China, configure for CPU-only like this:
    cmake .. -C ../cmake/caches/international/cpu.cmake
    or configure for CUDA like this:
    cmake .. -C ../cmake/caches/international/cuda.cmake -DCMAKE_CUDA_ARCHITECTURES=80 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DCUDNN_ROOT_DIR=/usr/local/cudnn
  Here CMAKE_CUDA_ARCHITECTURES specifies the target CUDA architecture, while CUDA_TOOLKIT_ROOT_DIR and CUDNN_ROOT_DIR specify the root paths of the CUDA Toolkit and cuDNN.
- Build the project inside the build directory:
  make -j$(nproc)
- Add oneflow to your PYTHONPATH by running the following inside the build directory:
  source source.sh
  Please note that this change is not permanent.
- Simple validation:
  python3 -m oneflow --doctor
Please refer to troubleshooting for common issues you might encounter when compiling and running OneFlow.
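Beyond the doctor check, the short smoke test below (an illustrative sketch, assuming the build finished and `source source.sh` has been run in the current shell) verifies that Python picks up the freshly built package and that basic tensor ops work:

```python
# Post-build smoke test (illustrative).
import oneflow as flow

print(flow.__file__)              # should point into your OneFlow source/build tree, not site-packages
print(flow.__version__)

x = flow.randn(2, 3)
y = flow.randn(3, 4)
print(flow.matmul(x, y).shape)    # expect oneflow.Size([2, 4])

if flow.cuda.is_available():      # only True for CUDA builds with a visible GPU
    print(flow.matmul(x.cuda(), y.cuda()).device)
```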
- Please refer to QUICKSTART
- For the Chinese version, please refer to 快速上手 (Quick Start)
- Libai (Toolbox for Parallel Training of Large-Scale Transformer Models)
- FlowVision (Toolbox for Computer Vision Datasets, SOTA Models and Utils)
- OneFlow-Models (Outdated)
- OneFlow-Benchmark (Outdated)
- GitHub issues: for any install, bug, or feature issues.
- www.oneflow.org: for brand-related information.
- QQ group: 331883
- WeChat (add as a friend to join the discussion group): OneFlowXZS
- Zhihu
OneFlow was originally developed by OneFlow Inc and Zhejiang Lab.