## Low-Precision Deployment for Paddle Serving

(English|[简体中文](./Low_Precision_CN.md))

Intel CPUs support int8 and bfloat16 models, while NVIDIA TensorRT supports int8 and float16 models.
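
The precision mode is chosen when the server is launched. As a quick sketch of how the modes above map onto launch commands (assuming your Paddle Serving build exposes the `--use_mkl`, `--use_trt`, and `--precision` flags; only the TensorRT int8 path is walked through end-to-end below):

```
# NVIDIA GPU via TensorRT: int8 or float16
python -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_trt --precision int8
python -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_trt --precision fp16
# Intel CPU: int8 or bfloat16 (assumption: --use_mkl and bf16 support in your build)
python -m paddle_serving_server.serve --model serving_server --port 9393 --use_mkl --precision int8
python -m paddle_serving_server.serve --model serving_server --port 9393 --use_mkl --precision bf16
```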

### Obtain the quantized model through the PaddleSlim tool
To train low-precision models, please refer to [PaddleSlim](https://paddleslim.readthedocs.io/zh_CN/latest/tutorials/quant/overview.html).
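
For instance, post-training quantization can turn a trained float32 inference model into the int8 model used below. A minimal sketch with PaddleSlim's `quant_post_static` (the float32 model directory and the calibration generator are placeholders, and argument names may vary across PaddleSlim versions):

```
import numpy as np
import paddle
from paddleslim.quant import quant_post_static

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

def sample_generator():
    # Placeholder: yield preprocessed calibration samples that match the
    # model's input (real image data should be used instead of random noise).
    for _ in range(320):
        yield [np.random.random((3, 224, 224)).astype("float32")]

# Reads the float32 model from ResNet50_fp32/ (hypothetical path) and writes
# an int8-quantized model to ResNet50_quant/.
quant_post_static(
    executor=exe,
    model_dir="ResNet50_fp32",
    quantize_model_path="ResNet50_quant",
    sample_generator=sample_generator,
    batch_size=32,
    batch_nums=10)
```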

### Deploy the quantized model from PaddleSlim using Paddle Serving with NVIDIA TensorRT int8 mode

First, download the [ResNet50 int8 model](https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz) and convert it to Paddle Serving's saved model format:
```
wget https://paddle-inference-dist.bj.bcebos.com/inference_demo/python/resnet50/ResNet50_quant.tar.gz
tar zxvf ResNet50_quant.tar.gz

# Convert the inference model into Serving format; by default this writes
# the serving_server/ and serving_client/ directories used below.
python -m paddle_serving_client.convert --dirname ResNet50_quant
```
Start the RPC service, specifying the GPU id and the precision mode:
```
Z
fix doc  
zhangjun 已提交
21
# --use_trt enables the TensorRT backend; --precision int8 selects int8 inference.
python -m paddle_serving_server.serve --model serving_server --port 9393 --gpu_ids 0 --use_trt --precision int8
```
Request the service with the Python client:
```
from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, File2Image, Resize, CenterCrop
from paddle_serving_app.reader import RGB2BGR, Transpose, Div, Normalize

# Connect to the RPC service started above.
client = Client()
client.load_client_config(
    "serving_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9393"])

# Preprocessing: decode the image, resize and center-crop to 224x224,
# convert to BGR CHW layout, scale to [0, 1] and normalize with ImageNet stats.
seq = Sequential([
    File2Image(), Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
    Div(255), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], True)
])

image_file = "daisy.jpg"
img = seq(image_file)
# Run inference and print the flattened output scores.
fetch_map = client.predict(feed={"image": img}, fetch=["save_infer_model/scale_0.tmp_0"])
print(fetch_map["save_infer_model/scale_0.tmp_0"].reshape(-1))
```
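
The fetched tensor holds the model's raw output scores. As a small usage sketch, the flattened vector can be turned into a top-1 prediction (assuming the standard 1000-class ResNet50 classifier output):

```
import numpy as np

scores = fetch_map["save_infer_model/scale_0.tmp_0"].reshape(-1)
top1 = int(np.argmax(scores))
print("top-1 class id: {}, score: {:.4f}".format(top1, scores[top1]))
```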

### Reference
* [PaddleSlim](https://github.com/PaddlePaddle/PaddleSlim)
* [Deploy the quantized model using Paddle Inference on Intel CPUs](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_x86_cpu_int8.html)
Z
fix doc  
zhangjun 已提交
48
* [Deploy the quantized model using Paddle Inference on NVIDIA GPUs](https://paddle-inference.readthedocs.io/en/latest/optimize/paddle_trt.html)