Unverified · Commit e7dbecd2 authored by littletomatodonkey, committed by GitHub

fix predict (#527)

* fix predict

* fix export model

* fix doc
Parent 5854b7c6
......@@ -44,7 +44,7 @@ wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/bench
After the PC and mobile phone are successfully connected, use the following command to start the model evaluation.
```
sh tools/lite/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
```
Where `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the path of all the models to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the model will be optimized before evaluation. The evaluation result file `result_armv8.txt` will then be saved in the current folder. The detailed results are as follows.
......
......@@ -226,13 +226,13 @@ Firstly, you should export inference model using `tools/export_model.py`.
python tools/export_model.py \
--model MobileNetV3_large_x1_0 \
--pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
--output_path ./inference/cls_infer
--output_path ./inference
```
Among them, `--model` specifies the model name, `--pretrained_model` specifies the path of the model weights (the path does not need to include the file suffix), and `--output_path` specifies the storage path of the converted model.
**Note**:
1. File prefix must be assigned in `--output_path`. If `--output_path=./inference/cls_infer`, then three files will be generated in the folder `inference`, they are `cls_infer.pdiparams`, `cls_infer.pdmodel` and `cls_infer.pdiparams.info`.
1. If `--output_path=./inference`, then three files will be generated in the folder `inference`: `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info`.
2. In the file `export_model.py:line53`, the `shape` parameter is the shape of the model input image; the default is `224*224`. Please modify it according to the actual situation, as shown below:
```python
50 # Please modify the 'shape' according to actual needs
51     input_spec=[
52         paddle.static.InputSpec(
53             shape=[None, 3, args.img_size, args.img_size], dtype='float32')
54     ])
```
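For instance, to export a model that accepts `320*320` inputs instead of the default `224*224`, the `InputSpec` could be built as follows (a minimal sketch; the 320 size is only an illustration, and in the current script the input size is controlled by the `--img_size` argument):
```python
import paddle

# Hypothetical InputSpec for 320x320 inputs (NCHW layout, dynamic batch size).
input_spec = [
    paddle.static.InputSpec(
        shape=[None, 3, 320, 320], dtype='float32')
]
```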
The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weight file (`cls_infer.pdiparams`), and then the inference engine can be used for inference:
The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and then the inference engine can be used for inference:
```bash
python tools/infer/predict.py \
--image_file image path \
--model_file "./inference/cls_infer.pdmodel" \
--params_file "./inference/cls_infer.pdiparams" \
--model_file "./inference/inference.pdmodel" \
--params_file "./inference/inference.pdiparams" \
--use_gpu=True \
--use_tensorrt=False
```
Among them:
+ `image_file`: The path of the image file to be predicted, such as `./test.jpeg`;
+ `model_file`: Model file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdmodel`;
+ `params_file`: Weight file path, such as `./MobileNetV3_large_x1_0/cls_infer.pdiparams`;
+ `model_file`: Model file path, such as `./MobileNetV3_large_x1_0/inference.pdmodel`;
+ `params_file`: Weight file path, such as `./MobileNetV3_large_x1_0/inference.pdiparams`;
+ `use_tensorrt`: Whether to use TensorRT, `True` by default;
+ `use_gpu`: Whether to use the GPU, `True` by default;
+ `enable_mkldnn`: Whether to use `MKL-DNN`, `False` by default. When both `use_gpu` and `enable_mkldnn` are set to `True`, the GPU is used and `enable_mkldnn` is ignored.
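As a rough sketch of how these flags map onto the Paddle Inference API (a minimal sketch assuming the files exported above; argument parsing and pre/post-processing are omitted):
```python
from paddle.inference import Config, create_predictor

# Minimal sketch, assuming the model exported in the previous step.
config = Config("./inference/inference.pdmodel", "./inference/inference.pdiparams")
config.enable_use_gpu(8000, 0)  # use_gpu=True: 8000 MB GPU memory pool on card 0
predictor = create_predictor(config)
```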
......
......@@ -45,7 +45,7 @@ wget -c https://paddle-inference-dist.bj.bcebos.com/PaddleLite/benchmark_0/bench
After the PC and the mobile phone are successfully connected, use the following command to start the model evaluation.
```
sh tools/lite/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
sh deploy/lite/benchmark/benchmark.sh ./benchmark_bin_v8 ./inference result_armv8.txt true
```
Here `./benchmark_bin_v8` is the path of the benchmark binary file, `./inference` is the path of all the models to be evaluated, `result_armv8.txt` is the result file, and the final parameter `true` means that the model will be optimized before evaluation. The evaluation result file `result_armv8.txt` will then be written to the current folder. The details are as follows.
......
......@@ -238,13 +238,13 @@ python tools/infer/infer.py \
python tools/export_model.py \
--model MobileNetV3_large_x1_0 \
--pretrained_model ./output/MobileNetV3_large_x1_0/best_model/ppcls \
--output_path ./inference/cls_infer
--output_path ./inference
```
Among them, `--model` specifies the model name, `--pretrained_model` specifies the path of the model weights, which again does not need to include the file suffix (see [1.3 Resume Training](#1.3)), and `--output_path` specifies the storage path of the converted model.
**Note**:
1. A file name prefix must be specified in `--output_path`. If `--output_path=./inference/cls_infer`, the files `cls_infer.pdiparams`, `cls_infer.pdmodel` and `cls_infer.pdiparams.info` will be generated in the `inference` folder.
1. `--output_path` is the output folder of the inference model. If `--output_path=./inference`, the files `inference.pdiparams`, `inference.pdmodel` and `inference.pdiparams.info` will be generated in the `inference` folder.
2. In the file `export_model.py:line53`, the `shape` parameter is the shape of the model input image; the default is `224*224`. Please modify it according to the actual situation, as shown below:
```python
50 # Please modify the 'shape' according to actual needs
51     input_spec=[
52         paddle.static.InputSpec(
53             shape=[None, 3, args.img_size, args.img_size], dtype='float32')
54     ])
```
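As a quick sanity check after export (a hypothetical snippet, not part of the repository), the three generated files can be verified like this:
```python
import os

# Hypothetical check: the export step should have produced these three files.
for suffix in ("pdmodel", "pdiparams", "pdiparams.info"):
    path = "./inference/inference.{}".format(suffix)
    assert os.path.exists(path), "missing exported file: {}".format(path)
```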
The above command will generate the model structure file (`cls_infer.pdmodel`) and the model weight file (`cls_infer.pdiparams`), and then the inference engine can be used for inference:
The above command will generate the model structure file (`inference.pdmodel`) and the model weight file (`inference.pdiparams`), and then the inference engine can be used for inference:
```bash
python tools/infer/predict.py \
--image_file image path \
--model_file "./inference/cls_infer.pdmodel" \
--params_file "./inference/cls_infer.pdiparams" \
--model_file "./inference/inference.pdmodel" \
--params_file "./inference/inference.pdiparams" \
--use_gpu=True \
--use_tensorrt=False
```
Among them:
+ `image_file`: The path of the image file to be predicted, such as `./test.jpeg`;
+ `model_file`: The path of the model structure file, such as `./inference/cls_infer.pdmodel`;
+ `params_file`: The path of the model weight file, such as `./inference/cls_infer.pdiparams`;
+ `model_file`: The path of the model structure file, such as `./inference/inference.pdmodel`;
+ `params_file`: The path of the model weight file, such as `./inference/inference.pdiparams`;
+ `use_tensorrt`: Whether to use the TensorRT inference engine, `True` by default;
+ `use_gpu`: Whether to use the GPU for inference, `True` by default;
+ `enable_mkldnn`: Whether to enable `MKL-DNN` acceleration, `False` by default. Note that when `enable_mkldnn` and `use_gpu` are both `True`, `enable_mkldnn` is ignored and the GPU is used.
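To illustrate the `use_gpu` / `enable_mkldnn` interaction described above, a CPU-only configuration might look like this (a minimal sketch; the paths are the files exported in the previous step):
```python
from paddle.inference import Config, create_predictor

# Sketch of the CPU path: MKL-DNN takes effect only when the GPU is disabled.
config = Config("./inference/inference.pdmodel", "./inference/inference.pdiparams")
config.disable_gpu()    # use_gpu=False
config.enable_mkldnn()  # enable_mkldnn=True is honored on CPU only
predictor = create_predictor(config)
```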
......
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

import paddle
import numpy as np
......
......@@ -33,8 +33,7 @@ def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--model", type=str)
    parser.add_argument("-p", "--pretrained_model", type=str)
    parser.add_argument(
        "-o", "--output_path", type=str, default="./inference/cls_infer")
    parser.add_argument("-o", "--output_path", type=str, default="./inference")
    parser.add_argument("--class_dim", type=int, default=1000)
    parser.add_argument("--load_static_weights", type=str2bool, default=False)
    parser.add_argument("--img_size", type=int, default=224)
......@@ -73,7 +72,7 @@ def main():
        paddle.static.InputSpec(
            shape=[None, 3, args.img_size, args.img_size], dtype='float32')
    ])
    paddle.jit.save(model, args.output_path)
    paddle.jit.save(model, os.path.join(args.output_path, "inference"))
if __name__ == "__main__":
......
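With this change, `paddle.jit.save` always writes files with the fixed prefix `inference` inside the directory given by `--output_path`. A minimal, self-contained sketch of that behavior, using a toy layer rather than PaddleClas code:
```python
import os
import paddle

# Toy model for illustration only; PaddleClas builds the real network elsewhere.
model = paddle.jit.to_static(
    paddle.nn.Linear(4, 2),
    input_spec=[paddle.static.InputSpec(shape=[None, 4], dtype='float32')])
# Writes ./inference/inference.pdmodel and ./inference/inference.pdiparams.
paddle.jit.save(model, os.path.join("./inference", "inference"))
```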
......@@ -14,19 +14,21 @@
import numpy as np
import cv2
import utils
import shutil
import os
import sys
import paddle
import paddle.nn.functional as F
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
from ppcls.utils.save_load import load_dygraph_pretrain
from ppcls.modeling import architectures
import paddle
import paddle.nn.functional as F
import utils
from utils import get_image_list
def postprocess(outputs, topk=5):
......@@ -36,23 +38,6 @@ def postprocess(outputs, topk=5):
    return zip(index, prob[index])
def get_image_list(img_file):
    imgs_lists = []
    if img_file is None or not os.path.exists(img_file):
        raise Exception("not found any img file in {}".format(img_file))

    img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
    if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
        imgs_lists.append(img_file)
    elif os.path.isdir(img_file):
        for single_file in os.listdir(img_file):
            if single_file.split('.')[-1] in img_end:
                imgs_lists.append(os.path.join(img_file, single_file))
    if len(imgs_lists) == 0:
        raise Exception("not found any img file in {}".format(img_file))
    return imgs_lists


def save_prelabel_results(class_id, input_filepath, output_dir):
    output_dir = os.path.join(output_dir, str(class_id))
    if not os.path.isdir(output_dir):
......
......@@ -12,13 +12,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
sys.path.insert(0, ".")
import tools.infer.utils as utils
import numpy as np
import cv2
import time
import sys
sys.path.insert(0, ".")
import tools.infer.utils as utils
from tools.infer.utils import get_image_list
def predict(args, predictor):
    input_names = predictor.get_input_names()
......@@ -32,27 +34,33 @@ def predict(args, predictor):
    if not args.enable_benchmark:
        # for PaddleHubServing
        if args.hubserving:
            img = args.image_file
            img_list = [args.image_file]
        # for predict only
        else:
            img = cv2.imread(args.image_file)[:, :, ::-1]
            assert img is not None, "Error in loading image: {}".format(
                args.image_file)
        inputs = utils.preprocess(img, args)
        inputs = np.expand_dims(
            inputs, axis=0).repeat(
                args.batch_size, axis=0).copy()
        input_tensor.copy_from_cpu(inputs)
        predictor.run()
        output = output_tensor.copy_to_cpu()
        classes, scores = utils.postprocess(output, args)
        if args.hubserving:
            return classes, scores
        print("Current image file: {}".format(args.image_file))
        print("\ttop-1 class: {0}".format(classes[0]))
        print("\ttop-1 score: {0}".format(scores[0]))
            img_list = get_image_list(args.image_file)
        for idx, img_name in enumerate(img_list):
            if not args.hubserving:
                img = cv2.imread(img_name)
                assert img is not None, "Error in loading image: {}".format(
                    img_name)
                img = img[:, :, ::-1]
            else:
                img = img_name
            inputs = utils.preprocess(img, args)
            inputs = np.expand_dims(
                inputs, axis=0).repeat(
                    args.batch_size, axis=0).copy()
            input_tensor.copy_from_cpu(inputs)
            predictor.run()
            output = output_tensor.copy_to_cpu()
            classes, scores = utils.postprocess(output, args)
            if args.hubserving:
                return classes, scores
            print("Current image file: {}".format(img_name))
            print("\ttop-1 class: {0}".format(classes[0]))
            print("\ttop-1 score: {0}".format(scores[0]))
    else:
        for i in range(0, test_num + 10):
            inputs = np.random.rand(args.batch_size, 3, 224,
......
......@@ -12,9 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import argparse
import cv2
import numpy as np
from paddle.inference import Config
from paddle.inference import create_predictor
......@@ -125,6 +127,23 @@ def postprocess(output, args):
    return classes, scores
def get_image_list(img_file):
    imgs_lists = []
    if img_file is None or not os.path.exists(img_file):
        raise Exception("not found any img file in {}".format(img_file))

    img_end = ['jpg', 'png', 'jpeg', 'JPEG', 'JPG', 'bmp']
    if os.path.isfile(img_file) and img_file.split('.')[-1] in img_end:
        imgs_lists.append(img_file)
    elif os.path.isdir(img_file):
        for single_file in os.listdir(img_file):
            if single_file.split('.')[-1] in img_end:
                imgs_lists.append(os.path.join(img_file, single_file))
    if len(imgs_lists) == 0:
        raise Exception("not found any img file in {}".format(img_file))
    return imgs_lists
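# Usage sketch (illustrative, not part of this commit; the paths are hypothetical):
#   get_image_list("./dataset/demo")   # every jpg/png/jpeg/bmp file in the folder
#   get_image_list("./test.jpeg")      # -> ["./test.jpeg"]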
class ResizeImage(object):
    def __init__(self, resize_short=None):
        self.resize_short = resize_short
......