[Chinese version](./README_cn.md)

# Paddle Lite

[![Build Status](https://travis-ci.org/PaddlePaddle/paddle-mobile.svg?branch=develop&longCache=true&style=flat-square)](https://travis-ci.org/PaddlePaddle/paddle-mobile)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](https://github.com/PaddlePaddle/paddle-mobile/tree/develop/doc)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
<!-- [![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle-Mobile.svg)](https://github.com/PaddlePaddle/Paddle-Mobile/releases) -->

Paddle Lite is an updated version of Paddle-Mobile, an open-source deep learning framework designed to make it easy to perform inference on mobile and IoT devices. It is compatible with PaddlePaddle and with pre-trained models from other sources.

For tutorials, please see [PaddleLite Wiki](https://github.com/PaddlePaddle/paddle-mobile/wiki).

## Key Features

### Light Weight

On mobile devices, the execution module can be deployed without third-party libraries, because the execution module and the analysis module are decoupled.

The dynamic libraries provided by Paddle Lite, which contain 80 operators and 85 kernels, take up only 800 KB on ARMv7 and 1.3 MB on ARMv8.

Paddle Lite enables immediate inference without extra optimization.
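As a sketch of what this looks like in practice, the snippet below runs inference with the light-weight mobile API. The header, `MobileConfig`, and the predictor methods follow Paddle Lite's C++ API but may differ between versions, and the model directory is a hypothetical path; treat this as an illustration rather than a definitive example.

```cpp
// Minimal on-device inference sketch with Paddle Lite's light-weight API.
// Assumes the Paddle Lite prebuilt library and headers are available;
// names may vary by release.
#include <iostream>
#include <vector>
#include "paddle_api.h"  // shipped with the Paddle Lite prebuilt package

int main() {
  using namespace paddle::lite_api;

  // 1. Point the config at an already-optimized model; no analysis or
  //    optimization work happens on the device itself.
  MobileConfig config;
  config.set_model_dir("mobilenet_v1");  // hypothetical model directory

  // 2. Create a predictor backed only by the execution module.
  auto predictor = CreatePaddlePredictor<MobileConfig>(config);

  // 3. Fill the input tensor (e.g. a 1x3x224x224 image).
  auto input = predictor->GetInput(0);
  input->Resize({1, 3, 224, 224});
  float* data = input->mutable_data<float>();
  for (int i = 0; i < 1 * 3 * 224 * 224; ++i) data[i] = 0.f;

  // 4. Run inference and read back the first output value.
  predictor->Run();
  auto output = predictor->GetOutput(0);
  std::cout << "first score: " << output->data<float>()[0] << std::endl;
  return 0;
}
```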

### High Performance

Paddle Lite enables device-optimized kernels, maximizing ARM CPU performance.

It also supports INT8 quantization with the [PaddleSlim model compression tools](https://github.com/PaddlePaddle/models/tree/v1.5/PaddleSlim), reducing model size while improving performance.

Performance is also boosted on Huawei NPU and FPGA.

### High Compatibility

Hardware compatibility: Paddle Lite supports a wide range of hardware, including ARM CPU, Mali GPU, Adreno GPU, Huawei NPU, and FPGA. In the near future, we will also support AI chips from Cambricon and Bitmain.

Model compatibility: The operators of Paddle Lite are fully compatible with those of PaddlePaddle. The accuracy and performance of 18 models (mostly CV and OCR models) and 85 operators have been validated. More models will be supported in the future.

Framework compatibility: In addition to models trained with PaddlePaddle, models trained with Caffe and TensorFlow can be converted for use with Paddle Lite via [X2Paddle](https://github.com/PaddlePaddle/X2Paddle). In the future, we will also support models in ONNX format.
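As an illustration, converting a TensorFlow or Caffe model with the X2Paddle command-line tool might look like the following. The flag names follow the X2Paddle documentation but may vary between versions, and all file paths here are hypothetical.

```shell
# Install the converter (assumed to be available via pip)
pip install x2paddle

# Convert a TensorFlow frozen graph (hypothetical paths)
x2paddle --framework=tensorflow --model=frozen_model.pb --save_dir=paddle_model

# Convert a Caffe model (hypothetical paths)
x2paddle --framework=caffe --prototxt=deploy.prototxt \
         --weight=deploy.caffemodel --save_dir=paddle_model
```

The converted model under `paddle_model` can then be optimized and deployed with Paddle Lite like any natively trained PaddlePaddle model.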

## Architecture

Paddle Lite is designed to support a wide range of hardware and devices. It enables mixed execution of a single model across multiple devices, optimization at various phases, and lightweight applications on devices.

![img](https://github.com/Superjomn/_tmp_images/raw/master/images/paddle-lite-architecture.png)

As shown in the figure above, the analysis phase includes the machine IR module, which enables optimizations such as op fusion and redundant-computation pruning. The execution phase involves only kernel execution, so it can be deployed on its own to keep deployment as lightweight as possible.

## Key Info about the Update

The earlier Paddle-Mobile was designed to be compatible with PaddlePaddle and multiple hardware platforms, including ARM CPU, Mali GPU, Adreno GPU, FPGA, ARM-Linux, and Apple's GPU (Metal). Within Baidu, Inc., many product lines have been using Paddle-Mobile. For more details, please see [mobile/README](mobile/README).

As an update of Paddle-Mobile, Paddle Lite has incorporated many of the older capabilities into the [new architecture](https://github.com/PaddlePaddle/paddle-mobile/tree/develop/lite). For the time being, the code of Paddle-Mobile will be kept under the `mobile/` directory until it is completely migrated to Paddle Lite.

For inference with Apple's GPU (Metal) and for web front-end inference, please see `./metal` and `./web`. These two modules will continue to be developed and maintained.

## Special Thanks

Paddle Lite has referenced the following open-source projects:

- [ARM Compute Library](https://github.com/ARM-software/ComputeLibrary)
- [Anakin](https://github.com/PaddlePaddle/Anakin). The optimizations from Anakin have been incorporated into Paddle Lite, so there will be no further updates of Anakin. As another high-performance inference project under PaddlePaddle, Anakin was forward-looking and helpful in the making of Paddle Lite.

## Feedback and Community Support

- Questions, reports, and suggestions are welcome through GitHub Issues!
- Forum: Opinions and questions are welcome at our [PaddlePaddle Forum](https://ai.baidu.com/forum/topic/list/168)
- QQ group chat: 696965088