• New features
    • Add the ir_optim and use_mkl (CPU version only) arguments
    • Support custom DAG for prediction service
    • HTTP service supports batch prediction
    • HTTP service supports launching with uWSGI
    • Support model file monitoring, remote pulling, and hot loading
    • Support A/B testing
    • Add preprocessing modules for images, Chinese word segmentation, and Chinese sentiment analysis, plus postprocessing modules for image segmentation and image detection, to paddle-serving-app
    • Add pre-trained model and sample code downloads to paddle-serving-app, with an integrated profiling function
    • Release CentOS 6 Docker images for compiling Paddle Serving
  • Bug fixes
  • New documentation
  • Performance optimization
    • Reduced the time spent copying input and output memory in numpy.array format. In the ResNet-50 ImageNet classification task with a single concurrent client and batch size 1, QPS is 100.38% higher than in version 0.2.0.
  • Compatibility optimization
    • The client side removes the dependency on patchelf
    • Released paddle-serving-client for Python 2.7, 3.6, and 3.7
    • Server and client can be deployed in CentOS 6/7 and Ubuntu 16/18 environments
  • More demos
    • Chinese sentiment analysis task: lac + senta
    • Image segmentation task: deeplabv3, unet
    • Image detection task: faster_rcnn
    • Image classification task: mobilenet, resnet_v2_50
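Among the features above, batch prediction over the HTTP service means sending several feed dicts in a single request. The sketch below, using only the standard library, builds such a request body under the assumption that the service accepts the feed/fetch JSON format; the feed variable name "x" and the fetch variable "price" are illustrative placeholders, not names from this release.

```python
import json

def make_batch_request(samples, fetch_vars):
    """Build a JSON request body for batch prediction.

    Each element of `samples` is one feed dict; batching is
    expressed by sending a list of feed dicts in one request.
    """
    return json.dumps({"feed": samples, "fetch": fetch_vars})

# A batch of two samples, fetching one output variable.
body = make_batch_request(
    [{"x": [0.1, 0.2]}, {"x": [0.3, 0.4]}],
    ["price"],
)
```

The resulting body would then be POSTed to the service's prediction endpoint with a `Content-Type: application/json` header.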

Project Overview

A flexible, high-performance serving framework for machine learning models (the PaddlePaddle serving deployment framework)

Upstream repository:

https://github.com/PaddlePaddle/Serving
