
    OneFlow

    OneFlow is a performance-centered, open-source deep learning framework.


    Latest News

    • Version 0.5.0 is out!
      • First-class support for eager execution. The deprecated APIs have been moved to oneflow.compatible.single_client.
      • Drop-in replacement of import torch for existing PyTorch projects. You can try it by swapping the imports, e.g. import oneflow as torch and import torch as flow (see the one-liner below).
      • Full changelog
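
      For example, this one-liner should print a small 2x3 tensor eagerly, with OneFlow standing in for PyTorch:

        python3 -c "import oneflow as torch; print(torch.ones(2, 3))"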

    Install OneFlow

    System Requirements

    • Python 3.6, 3.7, 3.8, 3.9

    • (Highly recommended) Upgrade pip

      python3 -m pip install --upgrade pip #--user
    • CUDA Toolkit Linux x86_64 Driver

      • The CUDA runtime is statically linked into OneFlow. OneFlow will work with the minimum supported driver version and any newer driver. For more information, please refer to the CUDA compatibility documentation.

      • Please upgrade your NVIDIA driver to version 440.33 or above and install the CUDA 10.2 build of OneFlow if possible.
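
      • You can check which driver version is currently installed with nvidia-smi, for example:

        nvidia-smi --query-gpu=driver_version --format=csv,noheader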

    Install with Pip Package

    • To install the latest stable release of OneFlow with CUDA support:

      python3 -m pip install -f https://release.oneflow.info oneflow==0.5.0+cu102
    • To install nightly release of OneFlow with CUDA support:

      python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/cu102
    • To install other available builds for different variants:

      • Stable
        python3 -m pip install --find-links https://release.oneflow.info oneflow==0.5.0+[PLATFORM]
      • Nightly
        python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/[PLATFORM]
      • All available [PLATFORM] values (see the example at the end of this section):
        Platform            CUDA Driver Version   Supported GPUs
        cu112               >= 450.80.02          GTX 10xx, RTX 20xx, A100, RTX 30xx
        cu111               >= 450.80.02          GTX 10xx, RTX 20xx, A100, RTX 30xx
        cu110, cu110_xla    >= 450.36.06          GTX 10xx, RTX 20xx, A100
        cu102, cu102_xla    >= 440.33             GTX 10xx, RTX 20xx
        cu101, cu101_xla    >= 418.39             GTX 10xx, RTX 20xx
        cu100, cu100_xla    >= 410.48             GTX 10xx, RTX 20xx
        cpu                 N/A                   N/A
    • If you are in China, you can run the following to have pip download packages from a domestic PyPI mirror:

      python3 -m pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

      For more information on this, please refer to the PyPI mirror usage guide.
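
    • For example, to install the stable build for CUDA 11.2, substitute cu112 for [PLATFORM]:

      python3 -m pip install --find-links https://release.oneflow.info oneflow==0.5.0+cu112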

    Use docker image

    docker pull oneflowinc/oneflow:nightly-cuda10.2
    docker pull oneflowinc/oneflow:nightly-cuda11.1
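
    To start a container from one of these images and check the installation (assuming the image ships Python with OneFlow preinstalled and that the NVIDIA Container Toolkit is set up so --gpus works), you could run something like:

    docker run --rm -it --gpus all oneflowinc/oneflow:nightly-cuda10.2 \
        python3 -c "import oneflow; print(oneflow.__version__)"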

    Build from Source

    Clone Source Code
    • Option 1: Clone source code from GitHub

      git clone https://github.com/Oneflow-Inc/oneflow --depth=1
    • Option 2: Download from Aliyun

      If you are in China, please download OneFlow source code from: https://oneflow-public.oss-cn-beijing.aliyuncs.com/oneflow-src.zip

      curl https://oneflow-public.oss-cn-beijing.aliyuncs.com/oneflow-src.zip -o oneflow-src.zip
      unzip oneflow-src.zip
    Build OneFlow
    • Option 1: Build with Conda (recommended)

      Please refer to this repo

    • Option 2: Build in docker container (recommended)

      • Pull a docker image:

        docker pull oneflowinc/oneflow-manylinux2014-cuda10.2:0.1

        All available images: https://hub.docker.com/u/oneflowinc

      • In the root directory of OneFlow source code, run:

        python3 docker/package/manylinux/build_wheel.py --inplace --python_version=3.6

        This should produce .whl files in the wheelhouse directory.

      • If you are in China, you might need to add these flags:

        --use_tuna --use_system_proxy --use_aliyun_mirror
      • You can choose the CUDA/Python versions of the wheel by adding (a combined example is shown below):

        --cuda_version=10.1 --python_version=3.6,3.7
      • For more useful flags, please run the script with the --help flag or refer to the source code of the script.
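      • For example, an invocation combining the flags above (CUDA 10.1 wheels for Python 3.6 and 3.7, built in place) might look like:

        python3 docker/package/manylinux/build_wheel.py --inplace --cuda_version=10.1 --python_version=3.6,3.7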

    • Option 3: Build on bare metal

      • Install dependencies

        • on Ubuntu 20.04, run:
          sudo apt install -y libopenblas-dev nasm g++ gcc python3-pip cmake autoconf libtool
        • on macOS, run:
          brew install nasm
      • In the root directory of OneFlow source code, run:

        mkdir build
        cd build
      • Configure the project (inside the build directory):

        • If you are in China

          run this to configure for CUDA:

          cmake .. -C ../cmake/caches/cn/cuda.cmake

          run this to configure for CPU-only:

          cmake .. -C ../cmake/caches/cn/cpu.cmake
        • If you are not in China

          run this to configure for CUDA:

          cmake .. -C ../cmake/caches/international/cuda.cmake

          run this to configure for CPU-only:

          cmake .. -C ../cmake/caches/international/cpu.cmake
      • Build the project; inside the build directory, run:

        make -j$(nproc)
      • Add oneflow to your PYTHONPATH; inside the build directory, run:

        source source.sh

        Please note that this change is not permanent.

      • Simple validation

        python3 -m oneflow --doctor

    Troubleshooting

    Please refer to troubleshooting for common issues you might encounter when compiling and running OneFlow.

    Advanced features

    XRT
    • You can check this doc for more details on how to use XLA and TensorRT with OneFlow.

    Getting Started

    3 minutes to run MNIST.
    • Clone the demo code from OneFlow documentation

      git clone https://github.com/Oneflow-Inc/oneflow-documentation.git
      cd oneflow-documentation/cn/docs/single_client/code/quick_start/
    • Run it in Python

      python mlp_mnist.py
    • OneFlow is running and you should see the training loss printed:

      2.7290366
      0.81281316
      0.50629824
      0.35949975
      0.35245502
      ...
    • For more information on this demo, please refer to the quick start doc.

    Documentation

    Model Zoo and Benchmark

    Communication

    The Team

    OneFlow was originally developed by OneFlow Inc and Zhejiang Lab.

    License

    Apache License 2.0
