## Introduction

Complicated models help to improve model accuracy, but they also introduce redundancy. Model pruning reduces this redundancy by removing redundant sub-structures from the network, which lowers computational complexity and improves inference performance.

This tutorial introduces how to use PaddleSlim to prune a PaddleOCR model.

It is recommended that you read the following pages before working through this example:
1. [PaddleOCR training methods](../../../doc/doc_ch/quickstart.md)
2. [The demo of prune](https://paddlepaddle.github.io/PaddleSlim/tutorials/pruning_tutorial/)
3. [PaddleSlim prune API](https://paddlepaddle.github.io/PaddleSlim/api/prune_api/)
## Quick start

Five steps for OCR model pruning:
1. Install PaddleSlim
2. Prepare the trained model
3. Pruning sensitivity analysis
4. Model pruning and fine-tuning
5. Export the inference model and deploy it

### 1. Install PaddleSlim

```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
cd PaddleSlim
python setup.py install
```
### 2. Download the Pre-trained Model
Model pruning needs to load a pre-trained model.
PaddleOCR provides a series of trained models in the [model list](../../../doc/doc_en/models_list_en.md). Developers can choose one of these models or use a model trained by themselves, according to their needs.
### 3. Pruning sensitivity analysis
After the pre-trained model is loaded, sensitivity analysis is performed on each network layer of the model to understand the redundancy of each layer, and the results are saved to a sensitivity file named `sensitivities_0.data`. The user can then load this file via the [methods provided by PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/paddleslim/prune/sensitive.py#L221) to determine the pruning ratio of each network layer automatically. For specific details of sensitivity analysis, see: [Sensitivity analysis](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/zh_cn/tutorials/image_classification_sensitivity_analysis_tutorial.md)

The data format of the sensitivity file:

    sensitivities_0.data(Dict){
            'layer_weight_name_0': sens_of_each_ratio(Dict){'pruning_ratio_0': acc_loss, 'pruning_ratio_1': acc_loss}
            'layer_weight_name_1': sens_of_each_ratio(Dict){'pruning_ratio_0': acc_loss, 'pruning_ratio_1': acc_loss}
        }

    example:
        {
            'conv10_expand_weights': {0.1: 0.006509952684312718, 0.2: 0.01827734339798862, 0.3: 0.014528405644659832, 0.4: 0.030615754498018757, 0.5: 0.047105205602406594, 0.6: 0.06536008804270439, 0.7: 0.12391408417493704, 0.8: 0.11798612250664964}
            'conv10_linear_weights': {0.1: 0.05113190831455035, 0.2: 0.07705573833558801, 0.3: 0.12096721757739311, 0.4: 0.1819252083008504, 0.5: 0.3728054727792405, 0.6: 0.5135061352930738, 0.7: 0.7272187676899062, 0.8: 0.7908166677143281}
        }

Loading the sensitivity file returns a dict whose keys are the parameter names of each layer; the value of each key describes the pruning sensitivity of the corresponding layer. In the example, pruning 10% of the filters of the layer corresponding to `conv10_expand_weights` would lead to a 0.65% degradation of model performance. The details can be seen at: [Sensitivity analysis](https://github.com/PaddlePaddle/PaddleSlim/blob/develop/docs/zh_cn/algo/algo.md#2-%E5%8D%B7%E7%A7%AF%E6%A0%B8%E5%89%AA%E8%A3%81%E5%8E%9F%E7%90%86)
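To make the file format concrete, here is a minimal, dependency-free Python sketch of how such a sensitivity dict can be converted into per-layer pruning ratios. The `ratios_by_loss` helper and the loss budget values are illustrative stand-ins for PaddleSlim's `get_ratios_by_loss`, not the library's actual implementation:

```python
# Illustrative sketch: for each layer, pick the largest pruning ratio whose
# measured accuracy loss stays within a budget. Layers that cannot meet the
# budget are left out (i.e. not pruned).

# Entries abbreviated from the example sensitivity file shown above.
sensitivities = {
    'conv10_expand_weights': {0.1: 0.0065, 0.2: 0.0183, 0.3: 0.0145,
                              0.4: 0.0306, 0.5: 0.0471, 0.6: 0.0654},
    'conv10_linear_weights': {0.1: 0.0511, 0.2: 0.0771, 0.3: 0.1210,
                              0.4: 0.1819, 0.5: 0.3728, 0.6: 0.5135},
}

def ratios_by_loss(sens, max_acc_loss):
    """For each layer, choose the largest pruning ratio whose accuracy
    loss does not exceed max_acc_loss."""
    ratios = {}
    for name, losses in sens.items():
        feasible = [r for r, loss in losses.items() if loss <= max_acc_loss]
        if feasible:
            ratios[name] = max(feasible)
    return ratios

print(ratios_by_loss(sensitivities, max_acc_loss=0.05))
# → {'conv10_expand_weights': 0.5}  (conv10_linear_weights is too sensitive)
```

A tighter budget simply drives the chosen ratios down, which is why the accuracy budget directly trades off against the compression rate.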


Enter the PaddleOCR root directory, and perform sensitivity analysis on the model with the following command:

```bash
python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
```
### 4. Model Pruning and Fine-tuning

When pruning, the sensitivity file produced in the previous step determines the pruning ratio of each network layer. In the specific implementation, in order to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers closest to the input in the backbone. Similarly, to reduce the performance loss caused by pruning, we use the sensitivity table obtained from the previous analysis to identify the less redundant and more sensitive [network layers](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/pruning_and_finetune.py#L41), and skip them in the subsequent pruning process. After pruning, the model needs a fine-tuning process to recover its performance; the fine-tuning strategy is similar to the strategy used for training the original OCR detection model.
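The skip logic described above can be sketched as follows. The layer names, the `build_skip_list` helper, and the thresholds are hypothetical illustrations, not the exact values used by `pruning_and_finetune.py`:

```python
# Illustrative sketch of the skip-list logic: exclude layers close to the
# input (to keep low-level features), plus any layer whose measured accuracy
# loss is already high at a small reference pruning ratio.
# All names and thresholds here are hypothetical.

sensitivities = {
    'conv2_weights': {0.1: 0.0020},          # close to the input
    'conv10_expand_weights': {0.1: 0.0065},  # redundant enough to prune
    'conv10_linear_weights': {0.1: 0.0511},  # too sensitive to prune
}

def build_skip_list(sens, input_layers, ref_ratio=0.1, max_loss=0.03):
    """Layers excluded from pruning: those near the input, plus those whose
    accuracy loss at ref_ratio already exceeds max_loss."""
    skipped = set(input_layers)
    for name, losses in sens.items():
        if losses.get(ref_ratio, 0.0) > max_loss:
            skipped.add(name)
    return skipped

skip = build_skip_list(sensitivities, input_layers=['conv2_weights'])
prune_params = [name for name in sensitivities if name not in skip]
print(prune_params)
# → ['conv10_expand_weights']
```

Only the parameters that survive both filters are handed to the pruner; everything else keeps its original channels and is recovered, if needed, during fine-tuning.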

```bash
python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
```
### 5. Export inference model and deploy it
We can export the pruned model as inference_model for deployment:

```bash
python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
```

References for prediction and deployment of the inference model:
1. [inference model python prediction](../../../doc/doc_en/inference_en.md)
2. [inference model C++ prediction](../../cpp_infer/readme_en.md)
3. [Deployment of inference model on mobile](../../lite/readme_en.md)