Commit 8230ea2d authored by zhanghaolong, committed by Xinran Xu

Update README.md

Parent 74180979
# MegEngine
<p align="center">
<img width="250" height="109" src="logo.png">
</p>
English | [中文](README_CN.md)
......

MegEngine is a fast, scalable and easy-to-use deep learning framework, with auto-differentiation.
## Installation
**NOTE:** MegEngine now supports Linux-64bit/Windows-64bit/macOS-10.14+ (CPU only on macOS) platforms, with Python 3.5 to 3.8. On Windows 10 you can either install the Linux distribution through [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl) or install the Windows distribution directly.
### Binaries
The command to install prebuilt binaries via pip wheels is as follows:
```bash
python3 -m pip install megengine -f https://megengine.org.cn/whl/mge.html
```
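To verify the installation, a quick smoke test can be run from the shell. This is a minimal sketch: it assumes `python3` resolves to the interpreter the wheel was installed into, and that the package exposes the conventional `__version__` attribute and the `tensor`/`functional` APIs used below.

```bash
# Print the installed MegEngine version (assumes the conventional __version__ attribute).
python3 -c "import megengine; print(megengine.__version__)"

# Run a tiny computation to confirm the core library loads and executes.
python3 -c "import megengine as mge; import megengine.functional as F; print(F.sum(mge.tensor([1.0, 2.0, 3.0])))"
```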
## Build from Source
### Prerequisites
Most of the dependencies of MegEngine are located in the `third_party` directory, which can be prepared by executing:
```bash
./third_party/prepare.sh
./third_party/install-mkl.sh
```
But some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive) (>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn) (>=7.6) are required when building MegEngine with CUDA support.
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html) (>=5.1.5) is required when building with TensorRT support.
* LLVM/Clang (>=6.0) is required when building with Halide JIT support.
* Python (>=3.5) and NumPy are required to build Python modules.
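Before configuring a build, it may help to confirm that these toolchains are visible. The following checks are only a sketch and assume each tool is already on `PATH`; skip the ones that do not apply to your build:

```bash
# CUDA toolkit, needed only for CUDA-enabled builds.
nvcc --version

# Clang, needed only for Halide JIT support.
clang --version

# Python interpreter and NumPy, needed for the Python modules.
python3 --version
python3 -c "import numpy; print(numpy.__version__)"
```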
### Build
MegEngine uses CMake as the build tool.
We provide the following scripts to facilitate building; an example invocation follows the list.
* [host_build.sh](scripts/cmake-build/host_build.sh) builds MegEngine to run on the same host machine it is invoked on.
Please run the following command to get help information:
```
scripts/cmake-build/host_build.sh -h
```
* [cross_build_android_arm_inference.sh](scripts/cmake-build/cross_build_android_arm_inference.sh) cross-builds MegEngine to run on Android ARM platforms.
Please run the following command to get help information:
```
scripts/cmake-build/cross_build_android_arm_inference.sh -h
```
* [cross_build_linux_arm_inference.sh](scripts/cmake-build/cross_build_linux_arm_inference.sh) cross-builds MegEngine to run on Linux ARM platforms.
Please run the following command to get help information:
```
scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```
* [cross_build_ios_arm_inference.sh](scripts/cmake-build/cross_build_ios_arm_inference.sh) cross-builds MegEngine to run on iPhone/iPad (iOS) platforms.
Please run the following command to get help information:
```
scripts/cmake-build/cross_build_ios_arm_inference.sh -h
```
Please refer to [BUILD_README.md](scripts/cmake-build/BUILD_README.md) for more details.
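As a concrete illustration, a default host build and an Android cross build might be invoked as below. This is a sketch under assumptions: running a script without arguments is assumed to build a default configuration, and `NDK_ROOT` is an assumed way of pointing at the Android NDK; confirm both against each script's `-h` output and [BUILD_README.md](scripts/cmake-build/BUILD_README.md).

```bash
# Build for the local host with default options (assumed default behavior).
./scripts/cmake-build/host_build.sh

# Cross-build for Android ARM; NDK_ROOT is an assumed way to locate the NDK.
export NDK_ROOT=/path/to/android-ndk
./scripts/cmake-build/cross_build_android_arm_inference.sh
```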
## How to Contribute
......

We believe we can build an open and friendly community and power humanity with AI.
* Issue: [github.com/MegEngine/MegEngine/issues](https://github.com/MegEngine/MegEngine/issues)
* Email: [megengine-support@megvii.com](mailto:megengine-support@megvii.com)
* Forum: [discuss.megengine.org.cn](https://discuss.megengine.org.cn)
* QQ Group: 1029741705
* OPENI: [openi.org.cn/MegEngine](https://www.openi.org.cn/html/2020/Framework_0325/18.html)
## Resources
......
# MegEngine
<p align="center">
<img width="250" height="109" src="logo.png">
</p>
[English](README.md) | 中文
......

MegEngine is a fast, scalable, and easy-to-use deep learning framework with support for automatic differentiation.

## Installation
**NOTE:** MegEngine now supports Linux-64bit/Windows-64bit/macOS-10.14+ (CPU only on macOS) platforms, with Python 3.5 to 3.8. Windows 10 users can try it by installing [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl); native Windows is also supported.
### Install via Package Manager
The command to install via pip is as follows:
```bash
python3 -m pip install megengine -f https://megengine.org.cn/whl/mge.html
```
## Build from Source
......

```bash
$ ./third_party/prepare.sh
$ ./third_party/install-mkl.sh
```
However, some dependencies need to be installed manually:
* [CUDA](https://developer.nvidia.com/cuda-toolkit-archive) (>=10.1) and [cuDNN](https://developer.nvidia.com/cudnn) (>=7.6), if you need a build with CUDA support
* [TensorRT](https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/index.html) (>=5.1.5), if you need a build with TensorRT support
* LLVM/Clang (>=6.0), if you need a build with Halide JIT support
* Python (>=3.5), NumPy, and SWIG (>=3.0), if you need to build the Python modules
### Build
MegEngine uses CMake as the build tool. We provide the following scripts to facilitate building:
* [host_build.sh](scripts/cmake-build/host_build.sh) is used to build for the local host.
The -h flag can be used to query the options the script supports:
```
scripts/cmake-build/host_build.sh -h
```
* [cross_build_android_arm_inference.sh](scripts/cmake-build/cross_build_android_arm_inference.sh) is used for Android ARM cross compilation.
The -h flag can be used to query the options the script supports:
```
scripts/cmake-build/cross_build_android_arm_inference.sh -h
```
* [cross_build_linux_arm_inference.sh](scripts/cmake-build/cross_build_linux_arm_inference.sh) is used for Linux ARM cross compilation.
The -h flag can be used to query the options the script supports:
```
scripts/cmake-build/cross_build_linux_arm_inference.sh -h
```
* [cross_build_ios_arm_inference.sh](scripts/cmake-build/cross_build_ios_arm_inference.sh) is used for iOS cross compilation.
The -h flag can be used to query the options the script supports:
```
scripts/cmake-build/cross_build_ios_arm_inference.sh -h
```
See [BUILD_README.md](scripts/cmake-build/BUILD_README.md) for more details.
## How to Contribute
......