Unverified commit e2cdf4ba, authored by 走神的阿圆, committed by GitHub

add v2 serving api (#421)

* add v2 serving api
Parent: 76a02fa7
# coding: utf8
import requests
import json

if __name__ == "__main__":
    # Specify the texts to predict and build the dict {"text": [text_1, text_2, ...]}
    text = {"text": ["今天是个好日子", "天气预报说今天要下雨"]}
    # Pass the texts under the parameter name expected by the prediction method,
    # here "data"; the local equivalent is lac.analysis_lexical(data=text)
    data = {"data": text}
    # Point the URL at the lac prediction endpoint and send a POST request
    url = "http://127.0.0.1:8866/predict/lac"
    # Declare the POST body as application/json
    headers = {"Content-Type": "application/json"}
    r = requests.post(url=url, headers=headers, data=json.dumps(data))
    # Print the prediction results
    print(json.dumps(r.json(), indent=4, ensure_ascii=False))
@@ -18,7 +18,8 @@ PaddleHub Serving can be started in two ways: from the command line, and
$ hub serving start --modules [Module1==Version1, Module2==Version2, ...] \
                    --port XXXX \
                    --use_gpu \
                    --use_multiprocess \
                    --workers XXXX
```
**Parameters**
@@ -28,7 +29,8 @@ $ hub serving start --modules [Module1==Version1, Module2==Version2, ...] \
|--modules/-m|Models to preload for PaddleHub Serving, listed as multiple Module==Version key-value pairs<br>*`If Version is not specified, the latest version is selected by default`*|
|--port/-p|Service port, default 8866|
|--use_gpu|Use GPU for prediction; requires paddlepaddle-gpu|
|--use_multiprocess|Whether to serve concurrently; defaults to single-process. Concurrent mode is recommended on multi-core CPU machines<br>*`Windows supports the single-process mode only`*|
|--workers|Number of concurrent workers in concurrent mode, default `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|
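For example, a concurrent deployment of the lac module might be started like this (a sketch; the port and worker count are illustrative):
```shell
$ hub serving start -m lac --port 8866 --use_multiprocess --workers 4
```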
#### Starting from a configuration file
Start command:
@@ -38,33 +40,45 @@ $ hub serving start --config config.json
The format of `config.json` is as follows:
```json
{
    "modules_info": {
        "yolov3_darknet53_coco2017": {
            "init_args": {
                "directory": "./my_yolov3",
                "version": "1.0.0"
            },
            "predict_args": {
                "batch_size": 1,
                "use_gpu": false
            }
        },
        "lac": {
            "init_args": {
                "version": "2.1.0",
                "user_dict": "./dict.txt"
            },
            "predict_args": {
                "batch_size": 1,
                "use_gpu": false
            }
        }
    },
    "port": 8866,
    "use_multiprocess": false,
    "workers": 2
}
```
**Parameters**

|Parameter|Purpose|
|-|-|
|modules_info|Models to preload for PaddleHub Serving, given as a dictionary keyed by model name, where:<br>`init_args` are the arguments passed when the model is loaded, equivalent to `paddlehub.Module(**init_args)`<br>`predict_args` are the arguments passed at prediction time; for `lac`, for example, equivalent to `lac.analysis_lexical(**predict_args)`|
|port|Service port, default 8866|
|use_gpu|Use GPU for prediction; requires paddlepaddle-gpu|
|use_multiprocess|Whether to serve concurrently; defaults to single-process. Concurrent mode is recommended on multi-core CPU machines<br>*`Windows supports the single-process mode only`*|
|workers|Number of concurrent workers, effective only in concurrent mode; default `2*cpu_count-1`, where `cpu_count` is the number of CPU cores|
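To make the `init_args`/`predict_args` mapping concrete, the `lac` entry above corresponds roughly to the following local calls (a sketch; the input text is illustrative):
```python
import paddlehub as hub

# init_args are forwarded to module loading: paddlehub.Module(**init_args)
lac = hub.Module(name="lac", version="2.1.0", user_dict="./dict.txt")
# predict_args are forwarded to the serving method: lac.analysis_lexical(**predict_args)
results = lac.analysis_lexical(
    data={"text": ["今天是个好日子"]}, batch_size=1, use_gpu=False)
```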
### Step2: Access the server
@@ -76,6 +90,7 @@ http://0.0.0.0:8866/predict/<CATEGORY\>/\<MODULE>
Prediction results are obtained by sending a POST request. Below we walk through a concrete demo of the PaddleHub Serving deployment and usage workflow.
### Step3: Custom development with PaddleHub Serving
After deploying a model service with PaddleHub Serving, you can build on the resulting API, for example to expose a web service or integrate prediction into an application, which offloads prediction from the client and improves performance. A web page demo is shown below:
@@ -85,6 +100,17 @@ http://0.0.0.0:8866/predict/<CATEGORY\>/\<MODULE>
</p>
### Step4: Stop serving
Use the stop command to shut down a running serving instance:
```shell
$ hub serving stop --port XXXX
```
**Parameters**

|Parameter|Purpose|
|-|-|
|--port/-p|Port of the service to stop, default 8866|
## Demo: deploying an online lac word-segmentation service
### Step1: Deploy the lac service online
@@ -171,6 +197,22 @@ if __name__ == "__main__":
}
```
### Step3: Stop the serving service
Since we started the service on the default port 8866, the corresponding stop command is:
```shell
$ hub serving stop --port 8866
```
Alternatively, omit the port, which then defaults to 8866:
```shell
$ hub serving stop
```
After serving has cleaned up the service, it prints:
```shell
PaddleHub Serving will stop.
```
at which point the serving service has stopped.
For the full details and code of this demo, see [LAC Serving](../../demo/serving/module_serving/lexical_analysis_lac). Below are some other one-click service deployment demos.
## Demo: one-click service deployment for other models
@@ -209,5 +251,60 @@ if __name__ == "__main__":
&emsp;&emsp;This example shows how senta_lstm is deployed as a Chinese sentiment-analysis service and queried online to obtain sentiment-analysis results for text. A client sketch follows.
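The request below is a sketch of such a client, assuming senta_lstm is served with the pre-v2 text interface (form-encoded `text` fields posted to `/predict/text/<module>`); the URL and sample texts are illustrative:
```python
# coding: utf8
import requests
import json

if __name__ == "__main__":
    # Texts to analyze, posted as form data under the "text" key
    data = {"text": ["这家餐厅很好吃", "这部电影真的很差劲"]}
    url = "http://127.0.0.1:8866/predict/text/senta_lstm"
    r = requests.post(url=url, data=data)
    print(json.dumps(r.json(), indent=4, ensure_ascii=False))
```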
## Client request format for new-version models
For some new-version models, the client request format has changed: it now mirrors the local prediction call more closely, to lower the learning cost.
Taking lac (2.1.0) as an example, sending a request in the old style described above returns the following message:
```python
{
"Warnning": "This usage is out of date, please use 'application/json' as content-type to post to /predict/lac. See 'https://github.com/PaddlePaddle/PaddleHub/blob/release/v1.5/docs/tutorial/serving.md' for more details."
}
```
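For reference, the old-style request that triggers this message posts form-encoded fields instead of a JSON body (a sketch; it assumes lac 2.1.0 is being served locally):
```python
import requests

# Old-style (pre-v2) request: form data, no application/json body
data = {"text": ["今天是个好日子"]}
r = requests.post("http://127.0.0.1:8866/predict/text/lac", data=data)
print(r.json())  # -> {"Warnning": "This usage is out of date, ..."}
```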
For lac (2.1.0), the request is made as follows:
```python
# coding: utf8
import requests
import json

if __name__ == "__main__":
    # Specify the texts to predict and build the dict {"text": [text_1, text_2, ...]}
    text = {"text": ["今天是个好日子", "天气预报说今天要下雨"]}
    # Pass the texts under the parameter name expected by the prediction method,
    # here "data"; the local equivalent is lac.analysis_lexical(data=text)
    data = {"data": text}
    # Point the URL at the lac prediction endpoint and send a POST request
    url = "http://127.0.0.1:8866/predict/lac"
    # Declare the POST body as application/json
    headers = {"Content-Type": "application/json"}
    r = requests.post(url=url, headers=headers, data=json.dumps(data))
    # Print the prediction results
    print(json.dumps(r.json(), indent=4, ensure_ascii=False))
```
Parsing the result works the same as in the previous approach. The output is:
```python
{
"results": [
{
"tag": [
"TIME", "v", "q", "n"
],
"word": [
"今天", "是", "个", "好日子"
]
},
{
"tag": [
"n", "v", "TIME", "v", "v"
],
"word": [
"天气预报", "说", "今天", "要", "下雨"
]
}
]
}
```
For the full details and code of this demo, see [LAC Serving_2.1.0](../../demo/serving/module_serving/lexical_analysis_lac).
## Bert Service
Besides one-click serving of pre-trained models, PaddleHub Serving also provides `Bert Service`, which supports fast deployment of models such as ernie_tiny and bert to serve reliable online embeddings. See [Bert Service](./bert_service.md) for details.
@@ -202,7 +202,7 @@ class ServingCommand(BaseCommand):
                    module=key,
                    version=init_args.get("version", "0.0.0")).start()
-            if "dir" not in init_args:
+            if "directory" not in init_args:
                init_args.update({"name": key})
            m = hub.Module(**init_args)
            method_name = m.serving_func_name
@@ -413,11 +413,11 @@ class ServingCommand(BaseCommand):
        except:
            ServingCommand.show_help()
            return False
-        self.link_module_info()
        if self.args.sub_command == "start":
            if self.args.bert_service == "bert_service":
                ServingCommand.start_bert_serving(self.args)
            elif self.args.bert_service is None:
+                self.link_module_info()
                self.start_serving()
            else:
                ServingCommand.show_help()
......
@@ -18,6 +18,9 @@ from __future__ import division
from __future__ import print_function

from paddlehub.common.utils import is_windows
+from paddlehub.common.utils import sort_version_key
+from paddlehub.common.utils import strflist_version
+from functools import cmp_to_key
linux_color_dict = {
    "white": "\033[1;37m%s\033[0m",
@@ -146,3 +149,27 @@ class TablePrinter(object):
    def get_text(self):
        self.add_horizontal_line()
        return self.text
+def paint_modules_info(module_versions_info):
+    if is_windows():
+        placeholders = [20, 8, 14, 14]
+    else:
+        placeholders = [30, 8, 16, 16]
+    tp = TablePrinter(
+        titles=["ResourceName", "Version", "PaddlePaddle", "PaddleHub"],
+        placeholders=placeholders)
+    module_versions_info.sort(key=cmp_to_key(sort_version_key))
+    for resource_name, resource_version, paddle_version, \
+            hub_version in module_versions_info:
+        colors = ["yellow", None, None, None]
+        tp.add_line(
+            contents=[
+                resource_name, resource_version,
+                strflist_version(paddle_version),
+                strflist_version(hub_version)
+            ],
+            colors=colors)
+    return tp.get_text()
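A usage sketch for `paint_modules_info` (the version tuples below are hypothetical; real entries come from `hub.HubServer().search_module_info`):
```python
# Each entry: (resource_name, resource_version, paddle_version, hub_version)
info = [("lac", "2.1.0", ["1.6.0"], ["1.5.0"]),
        ("lac", "2.0.0", ["1.6.0"], ["1.4.0"])]
print(paint_modules_info(info))  # renders the version-compatibility table
```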
@@ -56,7 +56,8 @@ def version_compare(version1, version2):

def base64s_to_cvmats(base64s):
    for index, value in enumerate(base64s):
-        value = bytes(value, encoding="utf8")
+        if isinstance(value, str):
+            value = bytes(value, encoding="utf8")
        value = base64.b64decode(value)
        value = np.fromstring(value, np.uint8)
        value = cv2.imdecode(value, 1)
@@ -65,6 +66,16 @@ def base64s_to_cvmats(base64s):
    return base64s
+def cvmats_to_base64s(cvmats):
+    for index, value in enumerate(cvmats):
+        retval, buffer = cv2.imencode('.jpg', value)
+        pic_str = base64.b64encode(buffer)
+        value = pic_str.decode()
+        cvmats[index] = value
+    return cvmats
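A round-trip sketch of these two helpers (assuming `cv2`, `numpy`, and `base64` are imported as in this module; the blank image is illustrative):
```python
import numpy as np

img = np.zeros((32, 32, 3), dtype=np.uint8)  # dummy BGR image
b64s = cvmats_to_base64s([img])              # cv2 mats -> base64 JPEG strings
mats = base64s_to_cvmats(b64s)               # base64 strings -> cv2 mats
assert mats[0].shape == img.shape
```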
def handle_mask_results(results, data_len):
    result = []
    if len(results) <= 0 and data_len != 0:
......
@@ -30,10 +30,12 @@ import paddlehub as hub
from paddlehub.common import utils
from paddlehub.common.downloader import default_downloader
from paddlehub.common.dir import MODULE_HOME
-from paddlehub.common.cml_utils import TablePrinter
+from paddlehub.common.cml_utils import paint_modules_info
from paddlehub.common.logger import logger
from paddlehub.common import tmp_dir
from paddlehub.module import module_desc_pb2
+from paddlehub.version import hub_version as sys_hub_verion
+from paddle import __version__ as sys_paddle_version
class LocalModuleManager(object):
@@ -135,40 +137,20 @@ class LocalModuleManager(object):
            return False, tips, None
        module_versions_info = hub.HubServer().search_module_info(
            module_name)
-        if module_versions_info is not None and len(
-                module_versions_info) > 0:
-            if utils.is_windows():
-                placeholders = [20, 8, 14, 14]
-            else:
-                placeholders = [30, 8, 16, 16]
-            tp = TablePrinter(
-                titles=[
-                    "ResourceName", "Version", "PaddlePaddle",
-                    "PaddleHub"
-                ],
-                placeholders=placeholders)
-            module_versions_info.sort(
-                key=cmp_to_key(utils.sort_version_key))
-            for resource_name, resource_version, paddle_version, \
-                    hub_version in module_versions_info:
-                colors = ["yellow", None, None, None]
-                tp.add_line(
-                    contents=[
-                        resource_name, resource_version,
-                        utils.strflist_version(paddle_version),
-                        utils.strflist_version(hub_version)
-                    ],
-                    colors=colors)
-            tips = "The version of PaddlePaddle or PaddleHub " \
-                   "can not match module, please upgrade your " \
-                   "PaddlePaddle or PaddleHub according to the form " \
-                   "below." + tp.get_text()
+        if module_versions_info is None:
+            tips = "Can't find module %s, please check your spelling." \
+                   % (module_name)
+        elif module_version is not None and module_version not in [
+                item[1] for item in module_versions_info
+        ]:
+            tips = "Can't find module %s with version %s, all versions are listed below." \
+                   % (module_name, module_version)
+            tips += paint_modules_info(module_versions_info)
        else:
-            tips = "Can't find module %s" % module_name
-            if module_version:
-                tips += " with version %s" % module_version
+            tips = "The version of PaddlePaddle(%s) or PaddleHub(%s) can not match module, please upgrade your PaddlePaddle or PaddleHub according to the form below." \
+                   % (sys_paddle_version, sys_hub_verion)
+            tips += paint_modules_info(module_versions_info)
        return False, tips, None
    result, tips, module_zip_file = default_downloader.download_file(
......
@@ -36,6 +36,20 @@ def predict_v2(module_info, input):
    return {"results": output}
+def predict_v2_advanced(module_info, input):
+    serving_method_name = module_info["method_name"]
+    serving_method = getattr(module_info["module"], serving_method_name)
+    predict_args = module_info["predict_args"]
+    predict_args.update(input)
+    for item in serving_method.__code__.co_varnames:
+        if item in module_info.keys():
+            predict_args.update({item: module_info[item]})
+    output = serving_method(**predict_args)
+    return {"results": output}
def predict_nlp(module_info, input_text, req_id, extra=None):
    method_name = module_info["method_name"]
    predict_method = getattr(module_info["module"], method_name)
@@ -318,6 +332,20 @@ def create_app(init_flag=False, configs=None):
    def predict_image(module_name):
        if request.path.split("/")[-1] not in cv_module_info.modules_info:
            return {"error": "Module {} is not available.".format(module_name)}
+        module_info = cv_module_info.get_module_info(module_name)
+        if module_info["code_version"] == "v2":
+            results = {}
+            # results = predict_v2(module_info, inputs)
+            results.update({
+                "Warnning":
+                "This usage is out of date, please "
+                "use 'application/json' as "
+                "content-type to post to "
+                "/predict/%s. See "
+                "'https://github.com/PaddlePaddle/PaddleHub/blob/release/v1.5/docs/tutorial/serving.md' for more details."
+                % (module_name)
+            })
+            return results
        req_id = request.data.get("id")
        img_base64 = request.form.getlist("image")
        extra_info = {}
@@ -354,7 +382,6 @@ def create_app(init_flag=False, configs=None):
        # module = default_module_manager.get_module(module_name)
        # predict_func_name = cv_module_info.get_module_info(module_name)[
        #     "method_name"]
-        module_info = cv_module_info.get_module_info(module_name)
        module = module_info["module"]
        predict_func_name = cv_module_info.cv_module_method.get(module_name, "")
        if predict_func_name != "":
@@ -374,6 +401,20 @@ def create_app(init_flag=False, configs=None):
    def predict_text(module_name):
        if request.path.split("/")[-1] not in nlp_module_info.nlp_modules:
            return {"error": "Module {} is not available.".format(module_name)}
+        module_info = nlp_module_info.get_module_info(module_name)
+        if module_info["code_version"] == "v2":
+            results = {}
+            # results = predict_v2(module_info, inputs)
+            results.update({
+                "Warnning":
+                "This usage is out of date, please "
+                "use 'application/json' as "
+                "content-type to post to "
+                "/predict/%s. See "
+                "'https://github.com/PaddlePaddle/PaddleHub/blob/release/v1.5/docs/tutorial/serving.md' for more details."
+                % (module_name)
+            })
+            return results
        req_id = request.data.get("id")
        inputs = {}
        for item in list(request.form.keys()):
@@ -385,16 +426,24 @@ def create_app(init_flag=False, configs=None):
                file_name = req_id + "_" + file.filename
                files[file_key].append(file_name)
                file.save(file_name)
-        module_info = nlp_module_info.get_module_info(module_name)
-        if module_info["code_version"] == "v2":
-            results = predict_v2(module_info, inputs)
-        else:
-            results = predict_nlp(
-                module_info=module_info,
-                input_text=inputs,
-                req_id=req_id,
-                extra=files)
+        results = predict_nlp(
+            module_info=module_info,
+            input_text=inputs,
+            req_id=req_id,
+            extra=files)
        return results

+    @app_instance.route("/predict/<module_name>", methods=["POST"])
+    def predict_modulev2(module_name):
+        if module_name in nlp_module_info.nlp_modules:
+            module_info = nlp_module_info.get_module_info(module_name)
+        elif module_name in cv_module_info.cv_modules:
+            module_info = cv_module_info.get_module_info(module_name)
+        else:
+            return {"Error": "Module {} is not available.".format(module_name)}
+        inputs = request.json
+        results = predict_v2_advanced(module_info, inputs)
+        return results

    return app_instance
......