Commit 088dc39b authored by H HexToString

fix doc for ocr

Parent e403c918
@@ -117,6 +117,21 @@ make -j10
You can execute `make install` to put the targets under the `./output` directory; to do this, add `-DCMAKE_INSTALL_PREFIX=./output` to the cmake command shown above to specify the output path.
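For example, a minimal sketch of a CPU build that installs to `./output` (assuming the Python-related variables are already set as in the steps above):
``` shell
# Assumes PYTHON_INCLUDE_DIR, PYTHON_LIBRARIES and PYTHON_EXECUTABLE are set
# as described earlier in this document.
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
      -DCMAKE_INSTALL_PREFIX=./output \
      -DSERVER=ON ..
make -j10
make install  # targets are placed under ./output
```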
### Compile C++ Server under the condition of WITH_OPENCV=ON
First of all, the OpenCV library must be installed; if it is not, please refer to the `Compile and install opencv` section later in this article.
In the compile command, add `-DOPENCV_DIR=${OPENCV_DIR}` and `-DWITH_OPENCV=ON`, for example:
``` shell
OPENCV_DIR=your_opencv_dir # `your_opencv_dir` is the installation path of the OpenCV library.
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON ..
make -j10
```
### Integrated GPU version paddle inference library
Compared with the CPU environment, the GPU environment needs to refer to the following table,
@@ -249,10 +264,7 @@ The following is the base library version matching relationship used by the Padd
Download the corresponding CuDNN version from the NVIDIA developer website, decompress it, and add `-DCUDNN_ROOT` to the cmake command to specify the CuDNN path.
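A hedged sketch of a GPU build that points cmake at a locally decompressed CuDNN (the `/path/to/cudnn` location is a placeholder):
``` shell
# Assumption: CuDNN was decompressed to /path/to/cudnn (containing include/ and lib64/).
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
      -DWITH_GPU=ON \
      -DCUDNN_ROOT=/path/to/cudnn \
      -DSERVER=ON ..
```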
### WITH_OPENCV Option
This compiles the Serving C++ Server part. If WITH_OPENCV=ON and the OpenCV library is not installed, please refer to the following instructions (if the OpenCV library is already installed, you can skip this).
#### Compile opencv
## Compile and install opencv
* First of all, you need to download the package for source compilation under Linux from the opencv official website. Taking opencv3.4.7 as an example, the download command is as follows.
@@ -309,19 +321,4 @@ opencv3/
|-- lib
|-- lib64
|-- share
```
#### Compile C++ Server under the condition of WITH_OPENCV=ON
In the compile command, add `-DOPENCV_DIR=${OPENCV_DIR}` and `-DWITH_OPENCV=ON`, for example:
``` shell
OPENCV_DIR=your_opencv_dir # `your_opencv_dir` is the installation path of the OpenCV library.
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON ..
make -j10
```
\ No newline at end of file
@@ -116,6 +116,21 @@ make -j10
You can execute `make install` to put the build targets under the `./output` directory; add the `-DCMAKE_INSTALL_PREFIX=./output` option in the cmake step to specify the output path.
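As an illustrative sketch (assuming the Python-related cmake variables from the steps above), the flag is simply appended to the cmake invocation:
``` shell
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
      -DCMAKE_INSTALL_PREFIX=./output \
      -DSERVER=ON ..
make -j10 && make install  # outputs are placed under ./output
```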
### Compile C++ Server with the WITH_OPENCV option enabled
When compiling the Serving C++ Server part with the WITH_OPENCV option enabled, the OpenCV library must be installed; if it is not, you can refer to the instructions later in this document to compile and install OpenCV.
In the compile command, add the `-DOPENCV_DIR=${OPENCV_DIR}` and `-DWITH_OPENCV=ON` options, for example:
``` shell
OPENCV_DIR=your_opencv_dir # `your_opencv_dir` is the installation path of the OpenCV library.
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON ..
make -j10
```
### Integrated GPU version paddle inference library
Compared with the CPU environment, the GPU environment needs to refer to the following table,
@@ -153,6 +168,7 @@ make -j10
**Note:** After the compilation succeeds, you need to set the `SERVING_BIN` path; see the [Notes](https://github.com/PaddlePaddle/Serving/blob/develop/doc/COMPILE_CN.md#注意事项) section later for details.
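A hypothetical sketch of setting it; the exact binary location depends on your build directory (the `server-build-gpu` path below is an assumption based on the build steps above):
``` shell
# Assumption: the compiled C++ server binary ends up under core/general-server/
# inside the build directory.
export SERVING_BIN=${PWD}/server-build-gpu/core/general-server/serving
```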
## Compile the Client part
``` shell
@@ -250,10 +266,7 @@ Paddle Serving supports prediction on GPU through the PaddlePaddle inference library. The WITH_GPU
Download the corresponding CuDNN version from the NVIDIA developer website, decompress it locally, then add the `-DCUDNN_LIBRARY` parameter to the cmake command to specify the path of the CuDNN library.
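A hedged sketch (the CuDNN path below is a placeholder for wherever you decompressed it):
``` shell
# Assumption: CuDNN was decompressed to /path/to/cudnn, with the shared library under lib64/.
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
      -DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
      -DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
      -DWITH_GPU=ON \
      -DCUDNN_LIBRARY=/path/to/cudnn/lib64/libcudnn.so \
      -DSERVER=ON ..
```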
### WITH_OPENCV Option
When compiling the Serving C++ Server part with the WITH_OPENCV option enabled, if the OpenCV library is not installed, you can refer to the following instructions (skip this if it is already installed):
#### Compile opencv
## Compile and install opencv
* First of all, you need to download the package for source compilation under Linux from the opencv official website. Taking opencv3.4.7 as an example, the download command is as follows.
@@ -308,19 +321,4 @@ opencv3/
|-- lib
|-- lib64
|-- share
```
#### Compile C++ Server with the WITH_OPENCV option enabled
In the compile command, add the `-DOPENCV_DIR=${OPENCV_DIR}` and `-DWITH_OPENCV=ON` options, for example:
``` shell
OPENCV_DIR=your_opencv_dir # `your_opencv_dir` is the installation path of the OpenCV library.
mkdir server-build-cpu && cd server-build-cpu
cmake -DPYTHON_INCLUDE_DIR=$PYTHON_INCLUDE_DIR/ \
-DPYTHON_LIBRARIES=$PYTHON_LIBRARIES \
-DPYTHON_EXECUTABLE=$PYTHON_EXECUTABLE \
-DOPENCV_DIR=${OPENCV_DIR} \
-DWITH_OPENCV=ON \
-DSERVER=ON ..
make -j10
```
\ No newline at end of file
@@ -18,10 +18,9 @@ tar xf test_imgs.tar
## C++ OCR Service
### Start Service
```
#Select a startup mode according to the CPU/GPU device
Select a startup mode according to the CPU/GPU device.
After the `--model` parameter, pass in the folder paths of multiple model files to start a prediction service that chains the models together.
```
#for cpu user
python -m paddle_serving_server.serve --model ocr_det_model ocr_rec_model --port 9293
#for gpu user
@@ -29,7 +28,7 @@ python -m paddle_serving_server_gpu.serve --model ocr_det_model ocr_rec_model --
```
### Client Prediction
#The pre-processing and post-processing are done in the C++ server part; the image's Base64-encoded string is passed into the C++ server
The pre-processing and post-processing are done in the C++ server part; only the image's Base64-encoded string is passed into the C++ server,
so the value of the `feed_var` parameter in the file `ocr_det_client/serving_client_conf.prototxt` should be changed.
For this case, `feed_type` should be 3 (which means the data type is string) and `shape` should be 1.
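For illustration, a sketch of what the adjusted entry might look like; apart from `feed_type: 3` and `shape: 1`, the field names and the variable name follow a typical generated `serving_client_conf.prototxt` and are assumptions:
```
feed_var {
  name: "image"         # assumed input variable name of the detection model
  alias_name: "image"
  is_lod_tensor: false
  feed_type: 3          # 3 = string: the Base64-encoded image
  shape: 1
}
```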
@@ -18,10 +18,9 @@ tar xf test_imgs.tar
## C++ OCR Service
### Start the Service
```
#Select a startup mode according to the CPU/GPU device
Select a startup mode according to the CPU/GPU device.
After `--model`, specify the folder paths of multiple model files to start a prediction service that chains the models together.
```
#for cpu user
python -m paddle_serving_server.serve --model ocr_det_model ocr_rec_model --port 9293
#for gpu user
@@ -29,7 +28,7 @@ python -m paddle_serving_server_gpu.serve --model ocr_det_model ocr_rec_model --
```
### Start the Client
#Since pre- and post-processing happen in the C++ Server part, only the image's Base64-encoded string is passed into the C++ Server, so the client configuration of the first model needs to be modified
Since pre- and post-processing happen in the C++ Server part, only the image's Base64-encoded string is passed into the C++ Server, so the client configuration of the first model needs to be modified:
change the `feed_var` field in `ocr_det_client/serving_client_conf.prototxt`.
For this example, `feed_type` should be changed to 3 (the data type is string) and `shape` to 1.
To run prediction, start the client with the client configuration folder paths of the multiple models appended, as in the sketch below.
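A hypothetical invocation; the client script name `ocr_cpp_client.py` and the order of the config folders are assumptions based on this example's layout:
```
#pass the client config folders of both models, detection first, then recognition
python ocr_cpp_client.py ocr_det_client ocr_rec_client
```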