Commit 22fd1962 authored by Youqing Xiaozhua, committed by Tingquan Gao

Upgrade the English Document

Parent 44b229b9
After preparing the configuration file, the training process can be started in the following way.
```
python tools/train.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Arch.pretrained=False \
-o Global.device=gpu
```
Among them, `-c` is used to specify the path of the configuration file, and `-o` is used to specify parameters that need to be modified or added. `-o Arch.pretrained=False` means not to use pre-trained models.
`-o Global.device=gpu` means to use the GPU for training. If you want to use the CPU for training, you need to set `Global.device` to `cpu`.
Of course, you can also directly modify the configuration file to update the configuration. For specific configuration parameters, please refer to [Configuration Document](config_description_en.md).
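For example, the following sketch overrides the number of training epochs and the output directory on the command line instead of editing the YAML file; it assumes the configuration file contains the common `Global.epochs` and `Global.output_dir` keys:
```bash
# A minimal sketch: update the configuration with -o instead of editing the YAML.
# Global.epochs and Global.output_dir are assumed to exist in this config file.
python tools/train.py \
    -c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Global.device=gpu \
    -o Global.epochs=20 \
    -o Global.output_dir=./output/
```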
After configuring the configuration file, you can finetune the model by loading the pretrained weights.
```
python tools/train.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Arch.pretrained=True \
-o Global.device=gpu
```
Among them, `-o Arch.pretrained` can be set to `True`/`False`, or to the path of the pretrained weights. When using your own weights, replace the value with their path, or modify the path directly in the configuration file.
We also provide many models pre-trained on the ImageNet-1k dataset. For the model list and download addresses, please refer to the [model library overview](../models/models_intro_en.md).
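If you have already downloaded one of these pretrained models, `Arch.pretrained` can also point at it directly instead of being set to `True`. A minimal sketch, assuming the weights were saved under a hypothetical local `./pretrained/` directory (replace the prefix with your own path; it is given without the `.pdparams` suffix):
```bash
# Sketch: finetune from locally stored weights rather than letting
# PaddleClas download them via Arch.pretrained=True.
python tools/train.py \
    -c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Arch.pretrained="./pretrained/MobileNetV3_large_x1_0_pretrained" \
    -o Global.device=gpu
```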
If the training process is terminated for some reason, you can also load the checkpoints to continue training.
```
python tools/train.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Global.checkpoints="./output/MobileNetV3_large_x1_0/epoch_5" \
-o Global.device=gpu
```
The configuration file does not need to be modified. You only need to add the `Global.checkpoints` parameter during training, which represents the path of the checkpoints. The model weights, learning rate, optimizer state and other information will be restored from the checkpoints.
**Note**:
* The `-o Global.checkpoints` parameter does not need to include the suffix of the checkpoints. The above training command will generate the checkpoints shown below during training. If you want to continue training from epoch `5`, just set `Global.checkpoints` to `./output/MobileNetV3_large_x1_0/epoch_5`; PaddleClas will automatically fill in the `.pdopt` and `.pdparams` suffixes.
```shell
output
└── MobileNetV3_large_x1_0
    ├── best_model.pdopt
    ├── best_model.pdparams
    ├── best_model.pdstates
    ├── epoch_1.pdopt
    ├── epoch_1.pdparams
    ├── epoch_1.pdstates
    .
    .
    .
```
The model evaluation process can be started as follows.
```bash
python tools/eval.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model
```
The above command will use `./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml` as the configuration file to evaluate the model `./output/MobileNetV3_large_x1_0/best_model`. You can also set the evaluation by changing the parameters in the configuration file, or update the configuration with the `-o` parameter, as shown above.
Some of the configurable evaluation parameters are described as follows:
* `Arch.name`: Model name
* `Global.pretrained_model`: The path of the model file to be evaluated
**Note:** If the model is a dygraph model, you only need to specify the prefix of the model files when loading it, instead of the suffix, as in [1.3 Resume Training](#13-resume-training).
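The prefix can just as well point at an intermediate checkpoint from the output directory shown in [1.3 Resume Training](#13-resume-training). A sketch, assuming the `epoch_5` checkpoint was saved during training:
```bash
# Sketch: evaluate the weights saved at epoch 5 instead of the best model.
python tools/eval.py \
    -c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Global.pretrained_model=./output/MobileNetV3_large_x1_0/epoch_5
```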
If you want to run PaddleClas on Linux with GPU, it is highly recommended to use `paddle.distributed.launch` to start the training and evaluation scripts, which makes multi-GPU running easier.
### 2.1 Model training
After preparing the configuration file, the training process can be started in the following way. `paddle.distributed.launch` specifies the GPU cards to run on through the `gpus` option:
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml
```
The format of output log information is the same as above, see [1.1 Model training](#11-model-training) for details.
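To train on a subset of the cards, keep `CUDA_VISIBLE_DEVICES` and `--gpus` consistent. A sketch following the same pattern, restricted to two GPUs:
```bash
# Sketch: distributed training on two GPUs only.
export CUDA_VISIBLE_DEVICES=0,1
python3 -m paddle.distributed.launch \
    --gpus="0,1" \
    tools/train.py \
        -c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml
```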
After configuring the configuration file, you can finetune the model by loading the pretrained weights.
```
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Arch.pretrained=True
```
Among them, `Arch.pretrained` can be set to `True`/`False`, or to the path of the pretrained weights. When using your own weights, replace the value with their path, or modify the path directly in the configuration file.
There are many examples of model finetuning in [Quick Start](./quick_start_en.md). You can refer to this tutorial to finetune the model on a specific dataset.
If the training process is terminated for some reason, you can also load the checkpoints to continue training.
```
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
--gpus="0,1,2,3" \
tools/train.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Global.checkpoints="./output/MobileNetV3_large_x1_0/epoch_5" \
-o Global.device=gpu
```
The configuration file does not need to be modified. You only need to add the `Global.checkpoints` parameter during training, which represents the path of the checkpoints. The model weights, learning rate, optimizer state and other information will be restored from the checkpoints, as described in [1.3 Resume training](#13-resume-training).
### 2.4 Model evaluation
The model evaluation process can be started as follows.
```bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
python3 -m paddle.distributed.launch \
tools/eval.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model
```
For parameter descriptions, see [1.4 Model evaluation](#14-model-evaluation).
## 3. Use the pre-trained model to predict
After the training is completed, you can predict with the pre-trained model obtained from the training, as follows:
```bash
python3 tools/infer.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Infer.infer_imgs=dataset/flowers102/jpg/image_00001.jpg \
-o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model
```
Among them:
+ `Infer.infer_imgs`: The path of the image file or folder to be predicted;
+ `Global.pretrained_model`: Weight file path, such as `./output/MobileNetV3_large_x1_0/best_model`;
For more detailed information, you can refer to [infer.py](../../../tools/infer.py).
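Since `Infer.infer_imgs` also accepts a folder, a whole directory can be predicted in one run. A sketch, additionally assuming the configuration file exposes an `Infer.batch_size` key:
```bash
# Sketch: predict every image in a folder; Infer.batch_size is assumed
# to exist in the Infer section of the configuration file.
python3 tools/infer.py \
    -c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
    -o Infer.infer_imgs=dataset/flowers102/jpg/ \
    -o Infer.batch_size=4 \
    -o Global.pretrained_model=./output/MobileNetV3_large_x1_0/best_model
```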
<a name="model_inference"></a>
## 4. Use the inference model to predict
PaddlePaddle supports inference using prediction engines, which will be introduced next.
Firstly, you should export the inference model using `tools/export_model.py`:
```bash
python3 tools/export_model.py \
-c ./ppcls/configs/quick_start/MobileNetV3_large_x1_0.yaml \
-o Global.pretrained_model=output/MobileNetV3_large_x1_0/best_model
```
Among them, the `Global.pretrained_model` parameter is used to specify the model file path; the path does not need to include the model file suffix.
The above command will generate the model structure file (`inference.pdmodel`) and the model weights file (`inference.pdiparams`); then the inference engine can be used for inference.
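Before running inference, you can verify the export. A sketch, assuming `./inference` is the default save directory of `export_model.py` (an `inference.pdiparams.info` file may also be generated):
```bash
# Sketch: list the exported inference model files.
ls ./inference
# inference.pdiparams  inference.pdiparams.info  inference.pdmodel
```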
Go to the deploy directory:
```
cd deploy
```
Execute the following command to run inference. Since the default value of `class_id_map_file` is the mapping file of the ImageNet dataset, which does not apply here, we set it to `None`:
```bash
python3 python/predict_cls.py \
-c configs/inference_cls.yaml \
-o Global.infer_imgs=../dataset/flowers102/jpg/image_00001.jpg \
-o Global.inference_model_dir=../inference/ \
-o PostProcess.Topk.class_id_map_file=None
```
Among them:
+ `Global.infer_imgs`: The path of the image file to be predicted;
+ `Global.inference_model_dir`: The directory of the inference model files, such as `../inference/`;
+ `Global.use_tensorrt`: Whether to use TensorRT, defaults to `False`;
+ `Global.use_gpu`: Whether to use the GPU, defaults to `True`;
+ `Global.enable_mkldnn`: Whether to use MKL-DNN, defaults to `False`. When both `Global.use_gpu` and `Global.enable_mkldnn` are set to `True`, the GPU is used and `Global.enable_mkldnn` is ignored;
+ `Global.use_fp16`: Whether to enable FP16, defaults to `False`;
**Note**: If you want to use `Transformer series models`, such as `DeiT_***_384`, `ViT_***_384`, etc., please pay attention to the input size of the model, which requires setting `resize_short=384` and `resize=384`.
If you want to evaluate the speed of the model, it is recommended to enable TensorRT for acceleration on GPU, and MKL-DNN on CPU, as sketched below.
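A sketch of such speed tests, reusing the switches listed above (the image and model paths are the same as in the example command):
```bash
# Sketch: GPU speed test with TensorRT (optionally FP16) enabled.
python3 python/predict_cls.py \
    -c configs/inference_cls.yaml \
    -o Global.infer_imgs=../dataset/flowers102/jpg/image_00001.jpg \
    -o Global.inference_model_dir=../inference/ \
    -o PostProcess.Topk.class_id_map_file=None \
    -o Global.use_tensorrt=True \
    -o Global.use_fp16=True

# Sketch: CPU speed test with MKL-DNN; Global.use_gpu must be False,
# otherwise Global.enable_mkldnn is ignored.
python3 python/predict_cls.py \
    -c configs/inference_cls.yaml \
    -o Global.infer_imgs=../dataset/flowers102/jpg/image_00001.jpg \
    -o Global.inference_model_dir=../inference/ \
    -o PostProcess.Topk.class_id_map_file=None \
    -o Global.use_gpu=False \
    -o Global.enable_mkldnn=True
```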