Commit 208865cb authored by T TeslaZhao

Update docs

Parent 7d26d3d1
......@@ -25,13 +25,18 @@ PaddleOCR 提供的 PP-OCR 系列模型覆盖轻量级服务端、轻量级移
```
git clone https://github.com/PaddlePaddle/Serving
```
The OCR example deployment can be completed in the following 6 steps.
- 1. Get the model
- 2. Save the model parameters for Serving deployment
- 3. Download the test dataset (optional)
- 4. Modify the `config.yml` configuration (optional)
- 5. Bind code to configuration
- 6. Start the service and verify
**1. Get the model**

This section uses the ultra-lightweight Chinese-English model ch_PP-OCRv2_xx for the deployment example. The model is small yet performs well, making it a very cost-effective choice.
```
python3 -m paddle_serving_app.package --get_model ocr_rec
tar -xzvf ocr_rec.tar.gz
......@@ -39,14 +44,19 @@ python3 -m paddle_serving_app.package --get_model ocr_det
tar -xzvf ocr_det.tar.gz
```
**2. Save the model parameters for Serving deployment**

To save you time, the pre-trained models have already been packaged into archives using the [Save the model parameters for Serving deployment](./5-1_Save_Model_Params_CN.md) procedure; just download and extract them. A model you trained yourself must go through that save step before it can be deployed as a service.
**3. Download the test dataset (optional)**

In this step, download the test image set. If you use your own test data, you can skip this step.
```
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/ocr/test_imgs.tar
tar xf test_imgs.tar
```
**4. Modify the `config.yml` configuration (optional)**

In this step, set service-, graph-, and OP-level attributes by editing the configuration file. If you use the default configuration, this step can be skipped.
Since there are many configuration items, only the core options are covered here; see [Configuration guide]() for the complete description of all options.
......@@ -155,7 +165,8 @@ op:
#min_subgraph_size: 3
```
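As a concrete illustration, a minimal `config.yml` for a two-model OCR pipeline might look as follows. All values (ports, concurrency, model paths) are assumptions for this sketch; consult the full configuration guide for the authoritative options:

```
# Illustrative pipeline config; every value here is an assumption.
http_port: 9999          # port the pipeline service listens on for HTTP
worker_num: 20           # number of worker threads for the whole service
op:
    det:
        concurrency: 2                    # concurrent instances of the det OP
        local_service_conf:
            model_config: ocr_det_model   # path to the saved det model params
            device_type: 1                # 0 = CPU, 1 = GPU
            devices: "0"                  # GPU card id(s)
    rec:
        concurrency: 1
        local_service_conf:
            model_config: ocr_rec_model
            device_type: 1
            devices: "0"
```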
**5. Bind code to configuration**

In this step, bind the code to the `config.yml` configuration file and set up the multi-model composition. Specifically, this includes:

1. Overriding the model pre- and post-processing:
......@@ -199,7 +210,7 @@ ocr_service.run_service()
```
**.启动服务与验证**
**.启动服务与验证**
启动服务前,可看到程序路径下所有文件路径如下:
```
......
# Save the model parameters for Serving deployment

The model parameters are already stored in the model files, so why save a separate copy for Serving deployment? There are 3 reasons:
......
# Image Classification
## 1. Get the model
```
wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/inference/ResNet50_vd_infer.tar && tar xf ResNet50_vd_infer.tar
```
## 2. Use paddle_serving_client to convert the downloaded inference model into model parameters for Serving deployment
```
# Save the ResNet50_vd model parameters
python3 -m paddle_serving_client.convert --dirname ./ResNet50_vd_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./ResNet50_vd_serving/ \
--serving_client ./ResNet50_vd_client/
```
After saving, two new folders, `ResNet50_vd_serving` and `ResNet50_vd_client`, appear in the current directory:
```
├── daisy.jpg
├── http_client.py
├── imagenet.label
├── ResNet50_vd_client
│   ├── serving_client_conf.prototxt
│   └── serving_client_conf.stream.prototxt
├── ResNet50_vd_infer
│   ├── inference.pdiparams
│   ├── inference.pdiparams.info
│   └── inference.pdmodel
├── ResNet50_vd_serving
│   ├── fluid_time_file
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
├── rpc_client.py
```
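The `serving_server_conf.prototxt` saved above describes the model's feed and fetch variables. An illustrative layout for the ResNet50_vd model is sketched below; the variable names are taken from the client code, while the shapes and type codes are assumptions:

```
# Illustrative serving_server_conf.prototxt; shapes/types are assumed.
feed_var {
  name: "inputs"
  alias_name: "inputs"
  is_lstm_feed: false
  feed_type: 1          # assumed: 1 = float32
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "save_infer_model/scale_0.tmp_1"
  is_lstm_feed: false
  fetch_type: 1
  shape: 1000
}
```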
**3. Start the service**

A C++ Serving service can accept HTTP, gRPC, and bRPC requests on a single network port. The `--model` flag specifies the model path, `--gpu_ids` the GPU card(s) to use, and `--port` the port.
```
python3 -m paddle_serving_server.serve --model ResNet50_vd_serving --gpu_ids 0 --port 9394
```
**4. Start the client**

1. `http_client.py` wraps the HTTP request client
```
python3 http_client.py
```
2. `rpc_client.py` wraps the RPC request client
```
python3 rpc_client.py
```
After a successful run, the model prediction is printed as follows:
```
prediction: daisy, probability: 0.9341399073600769
```
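The post-processing both clients perform (pick the top score and map it to a label) can be sketched in plain Python. The scores and labels below are made-up stand-ins for the model output tensor and the contents of `imagenet.label`:

```python
# Minimal sketch of the clients' post-processing, with made-up data:
# pick the index of the highest score and map it to its label.
def top_prediction(scores, labels):
    max_idx = max(range(len(scores)), key=lambda i: scores[i])
    # lines in imagenet.label end with a comma, which the clients strip
    return labels[max_idx].strip().replace(",", ""), scores[max_idx]

labels = ["daisy,", "rose,", "tulip,"]   # stand-in for imagenet.label lines
scores = [0.93, 0.04, 0.03]              # stand-in for the model's output
label, prob = top_prediction(scores, labels)
print("prediction: {}, probability: {}".format(label, prob))
```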
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from paddle_serving_client import HttpClient
#app
from paddle_serving_app.reader import Sequential, URL2Image, Resize
from paddle_serving_app.reader import CenterCrop, RGB2BGR, Transpose, Div, Normalize
import time
client = HttpClient()
client.load_client_config("./ResNet50_vd_client/serving_client_conf.prototxt")
'''
To use the gRPC client, call set_use_grpc_client(True),
or call client.grpc_client_predict(...) directly.
For the HTTP client, call set_use_grpc_client(False) (the default),
or call client.http_client_predict(...) directly.
'''
#client.set_use_grpc_client(True)
'''
To enable the Encrypt Module, uncomment the following line.
'''
#client.use_key("./key")
'''
To enable compression, uncomment the following lines.
'''
#client.set_response_compress(True)
#client.set_request_compress(True)
'''
Proto is the recommended data format for the HTTP body; set True (the default).
To use JSON in the HTTP body, set False.
'''
#client.set_http_proto(True)
client.connect(["127.0.0.1:9394"])
label_dict = {}
label_idx = 0
with open("imagenet.label") as fin:
    for line in fin:
        label_dict[label_idx] = line.strip()
        label_idx += 1
#preprocess
seq = Sequential([
URL2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
Div(255), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True)
])
start = time.time()
image_file = "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"
for i in range(1):
    img = seq(image_file)
    res = client.predict(feed={"inputs": img}, fetch=[], batch=False)
    if res is None:
        raise ValueError("predict error")
    if res.err_no != 0:
        raise ValueError("predict error. Response : {}".format(res))
    # argmax over the output scores
    scores = res.outputs[0].tensor[0].float_data
    max_idx = max(range(len(scores)), key=lambda i: scores[i])
    max_val = scores[max_idx]
    label = label_dict[max_idx].strip().replace(",", "")
    print("prediction: {}, probability: {}".format(label, max_val))
end = time.time()
print(end - start)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from paddle_serving_client import Client
#app
from paddle_serving_app.reader import Sequential, URL2Image, Resize
from paddle_serving_app.reader import CenterCrop, RGB2BGR, Transpose, Div, Normalize
import time
client = Client()
client.load_client_config("./ResNet50_vd_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9394"])
label_dict = {}
label_idx = 0
with open("imagenet.label") as fin:
    for line in fin:
        label_dict[label_idx] = line.strip()
        label_idx += 1
#preprocess
seq = Sequential([
URL2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
Div(255), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True)
])
start = time.time()
image_file = "https://paddle-serving.bj.bcebos.com/imagenet-example/daisy.jpg"
for i in range(1):
    img = seq(image_file)
    fetch_map = client.predict(feed={"inputs": img}, fetch=[], batch=False)
    # argmax over the output scores, then map the index to its label
    scores = fetch_map["save_infer_model/scale_0.tmp_1"][0]
    prob = max(scores)
    label = label_dict[scores.tolist().index(prob)].strip().replace(",", "")
    print("prediction: {}, probability: {}".format(label, prob))
end = time.time()
print(end - start)