From 92c54f352ec5fcd8dc4d6a9e228a21e1fc2dca66 Mon Sep 17 00:00:00 2001 From: Dong Daxiang <35550832+guru4elephant@users.noreply.github.com> Date: Sat, 22 Feb 2020 11:01:57 +0800 Subject: [PATCH] Update README.md --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index ff393f64..254337e0 100644 --- a/README.md +++ b/README.md @@ -11,11 +11,11 @@ An easy-to-use Machine Learning Model Inference Service Deployment Tool Paddle Serving helps deep learning developers deploy an online inference service without much effort. Currently, Paddle Serving supports deep learning models trained with [Paddle](https://github.com/PaddlePaddle/Paddle), although it is easy to integrate the inference engines of other deep learning frameworks. ## Key Features -- Integrate with Paddle training pipeline seemlessly, most paddle models can be deployed with one line command
. +- Integrates with the Paddle training pipeline seamlessly; most Paddle models can be deployed **with one line command**. - **Industrial serving features** supported, such as model management, online loading, online A/B testing, etc. - **Distributed Key-Value indexing** supported, which is especially useful for large-scale sparse features as model inputs. - **Highly concurrent and efficient communication** between clients and servers. -- **Multiple programming language** supported on client side, such as Golang, C++ and python +- **Multiple programming languages** supported on the client side, such as Golang, C++, and Python ## Installation