1.8.0 Release Note

Important Updates

This version deeply optimizes the functions, performance, and experience of the imperative programming mode (dynamic graph) and further strengthens the basic functions of the framework. It also significantly optimizes the performance of the native inference library; the lightweight inference engine Paddle Lite greatly extends its coverage of supported hardware; the new front-end inference engine Paddle.js is released; and Paddle Serving is comprehensively upgraded to provide powerful and easy-to-use service-oriented deployment. The corresponding development kits and utility components are further enriched and improved: the functions and experience of existing kits and components keep improving, and the image classification kit PaddleClas and the quantum machine learning framework Paddle Quantum are newly released.

Training framework: Deeply optimizes the functions, performance, and experience of imperative programming (dynamic graph), and especially enhances dynamic-to-static conversion: dynamic graph models with data-dependent control flow can now be saved as static graphs for deployment, or converted to static graph mode for training. Further optimizes the Data Loader function and the gradient clipping usage. Fixes declarative programming mode problems such as fetching variable-length Tensors in multi-card runs. The combination of mixed precision and recomputation shows good results for large-batch training. Adds a large number of APIs, and adds ComplexVariable, which supports complex number tensors and common complex number operations.

Inference Deployment: Paddle Inference adds multi-threaded multi-stream support under CUDA and TRT subgraph support for dynamic shape input, strengthens quantized inference, and significantly optimizes performance. Paddle Serving is fully upgraded, with more complete functions and significantly better usability. Paddle Lite further optimizes the compilation and installation experience and comprehensively improves its coverage of supported chips (including RK, MTK, Baidu Kunlun, Cambricon, Bitmain, and Huawei NPU) as well as the number and performance of the corresponding models. PaddleSlim quantization, pruning, and NAS functions continue to be strengthened. Releases Paddle.js, the first open source JavaScript deep learning front-end inference engine in China, which helps users deploy deep learning models in web pages.

Development kits: Newly releases PaddleClas, which includes 23 image classification network implementations and 117 image pre-training models, and adds data augmentation, SSLD distillation, and other auxiliary strategies as well as featured application cases. Fully upgrades the PaddleSeg portrait segmentation series of models and adds multiple remote sensing strategies. PaddleDetection, PaddleOCR, and the text-to-speech kit Parakeet cover more algorithms, with significantly higher speed.

Utility Components: PaddleHub adds more models, including a series of vision pre-training models, bringing the total number of pre-trained models to 120+. PaddleFL releases Version 1.0, open sourcing federated learning based on multi-party computation (MPC) and supporting horizontal, vertical, and other federated learning scenarios. PGL releases the industry's first graph neural network model, ERNIESage, which combines semantic information with structural information. PARL open sources EvoKit, the industry's first evolutionary learning application framework. The quantum machine learning framework Paddle Quantum is newly released.

Basic Framework

New APIs

  • Adds fluid.device_guard: Sets an OP's running device to CPU or GPU.
  • Adds fluid.enable_imperative and fluid.disable_imperative, which enable and disable dynamic graph mode through plain function calls, avoiding the extra code indentation required by with fluid.dygraph.guard() (see the sketch after this list).
  • Adds five APIs in the fluid.dygraph directory (see the document for details): BCELoss, L1Loss, MSELoss, NLLLoss, and InstanceNorm
  • Adds 30 APIs in the fluid.layers directory (see the document for details): addmm, allclose, arange, bmm, clamp, cross, diag_embed, dist, dot, elementwise_equal, flip, full, full_like, index_select, interpolate, log1p, log_softmax, logsumexp, meshgrid, nonzero, randint, randn, randperm, resize_bicubic, resize_linear, roll, t, tril, and triu
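
The sketch below illustrates the new function-style switch for dynamic graph mode. It is a minimal example assuming the 1.8 fluid API described above, not an excerpt from the official documentation:

    import numpy as np
    import paddle.fluid as fluid

    fluid.enable_imperative()  # enter dynamic graph mode without a `with` block

    x = fluid.dygraph.to_variable(np.ones([2, 2], dtype='float32'))
    y = fluid.layers.relu(x)   # ops execute eagerly in imperative mode
    print(y.numpy())

    fluid.disable_imperative()  # return to declarative (static graph) mode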

Function Optimization

  • Imperative Programming Mode (Dynamic Graph):

    • Enhances the dynamic-to-static conversion function: adds ProgramTranslator, based on grammar parsing and transformation, which supports deploying dynamic graph models with data-dependent control flow; also supports converting dynamic graph models into static graph models for training, improving the training performance of tasks such as RNN.
    • Refactors the variable life cycle management mechanism of the dynamic graph to ensure that memory/GPU memory resources are released correctly in train mode even when the var.backward() API is not called.
    • Adds the double grad function in dynamic graph mode, supporting GAN models trained with gradient penalty.
    • Previously no_grad could only be used as a decorator in dynamic graph mode; adds support for using it as a context manager, making gradient-free dynamic graph code easier to write (see the sketches after this list).
    • To make it easier to set the train/eval mode of the batchnorm and dropout ops individually, moves the train/eval mode information from a global setting into the Layer; adds a Layer-form Dropout that records its own mode information.
    • Supports the cond, switch, and while_loop control flow interfaces and tensor array read/write in dynamic graph mode, facilitating the unification of high-level APIs.
    • Modifies the behavior of if var in dynamic graph mode to judge by the values in var (incompatible upgrade), fixing the problem that if x > y did not behave as expected in dynamic graph mode; also supports converting var with float/long/int/len/index, improving dynamic graph usability.
    • For tasks that rely heavily on hooks, adds forward pre-hook and forward post-hook APIs to Layer, making it easy to obtain and change the values of intermediate variables without changing the network's input/output structure, improving dynamic graph usability (see the sketches after this list).
    • Enables the cudnn algorithm cache to take effect in dynamic graph mode, improving performance on the WaveFlow model by 200%.
  • Declarative Programming Mode (Static Graph):

    • The executor supports automatically pruning the network at run time according to the feed and fetch variables, removing the parts irrelevant to the current feed and fetch to improve running efficiency; this also supports multi-task learning networks.
    • Optimizes the back propagation process by automatically pruning variables that need no back propagation; it is no longer necessary to explicitly set stop_gradient=True on such variables when building the network.
    • The executor supports fetching variable-length Tensors in multi-card runs, providing better support for tasks that use variable-length data (e.g., some NLP tasks).
    • Fixes the problem that single-process multi-card inference discarded tail data insufficient to fill all cards; setting drop_last=False in DataLoader now avoids discarding the tail data.
    • Adds a mechanism for combining mixed precision (AMP) with recomputation; used together on the Bert-large model, they increase the maximum batch size by 400% and throughput by 17.5%-31.4%.
  • DataLoader:

    • Adds a multi-process mode to accelerate data reading. For Map-style datasets, users can improve reading performance by implementing a custom Dataset and BatchSampler; the speedup is significant for tasks with heavy data reading or complex preprocessing. For example, with multi-process reading, the video classification TSM model trains 419% faster in declarative programming mode ("static graph") and 89.6% faster in imperative programming mode ("dynamic graph"). See the sketches after this list.
  • Gradient Clipping Usage:

    • The clipping type is now passed in uniformly via the optimizer's grad_clip parameter, supporting both global and partial parameter clipping. The original set_gradient_clip API is no longer recommended and may be removed in later versions. The grad_clip parameter is also removed from ParamAttr (incompatible upgrade), so gradient clipping of a single parameter can no longer be configured through ParamAttr; clipping of selected parameters is implemented uniformly through the new API (see the sketches after this list).
  • Dynamic graphs, static graphs, and high-level APIs support consistent calls to collective operators.

  • Intel has stopped maintaining NGraph, so the code related to the NGraph library is removed.

  • Removes unused or incompatible attributes such as the is_test attribute from all MKL-DNN-related ops.

  • Adds the Support for Complex Number Computation:

    • Adds ComplexVariable, supporting complex number tensors and common complex number operations, including the four basic arithmetic operations, matmul, kron product, reshape, and transpose.
  • Function Upgrade of the Performance Analysis Tool (Profiler):

    • Supports hierarchical statistics and printing of profile results based on nested call relationships between events.
    • Adds the tracer_option parameter, configurable as Default, OpDetail, or AllOpDetail, letting users choose different levels of timing and analysis detail (see the sketches after this list).
    • Adds statistics for framework overhead and GpuMemcpy operations.
  • Full Optimization of Error Messages

    • Optimizes an accumulative total of thousands of vague error messages and standardizes the error type and description.
    • Automatically detects some user misoperations and gives clear error messages.
    • Optimizes GPU-related API error messages, converts unreadable error codes into specific messages, and keeps synchronous with information on NVIDIA's official website.
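
The following sketches illustrate several of the additions above; they are minimal usages assuming the 1.8 fluid API, not excerpts from official documentation. First, the no_grad context manager together with the Layer forward hooks (the hook function is a hypothetical helper):

    import numpy as np
    import paddle.fluid as fluid

    fluid.enable_imperative()
    fc = fluid.dygraph.Linear(4, 2)

    # forward post-hook: observe a layer's output without changing the network
    def print_output_hook(layer, inputs, outputs):  # hypothetical helper
        print('fc output shape:', outputs.shape)

    handle = fc.register_forward_post_hook(print_output_hook)

    # no_grad as a context manager (previously usable only as a decorator)
    with fluid.dygraph.no_grad():
        x = fluid.dygraph.to_variable(np.random.rand(3, 4).astype('float32'))
        y = fc(x)

    handle.remove()  # detach the hook once it is no longer needed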
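
Next, the multi-process DataLoader with a user-defined Map-style dataset. Class and parameter names assume the fluid.io interfaces documented for this release; the random dataset and the iteration idiom are illustrative:

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.io import Dataset, DataLoader

    class RandomDataset(Dataset):  # Map-style: defines __getitem__ and __len__
        def __init__(self, num_samples):
            self.num_samples = num_samples

        def __getitem__(self, idx):
            image = np.random.rand(3, 32, 32).astype('float32')
            label = np.array([idx % 10]).astype('int64')
            return image, label

        def __len__(self):
            return self.num_samples

    fluid.enable_imperative()
    loader = DataLoader(RandomDataset(1024),
                        places=fluid.CPUPlace(),
                        batch_size=32,
                        num_workers=4,   # multi-process reading
                        return_list=True)
    for image, label in loader():
        pass  # training step goes here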
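
Then, the new gradient clipping usage configured through the optimizer's grad_clip parameter; the need_clip filter shown for partial clipping is an assumption based on the partial-clipping support described above:

    import paddle.fluid as fluid

    # global clipping by global norm, passed to the optimizer
    clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0)
    optimizer = fluid.optimizer.SGD(learning_rate=0.1, grad_clip=clip)

    # partial clipping: only parameters selected by need_clip are clipped
    # (need_clip filter is an assumption; check the 1.8 fluid.clip docs)
    partial_clip = fluid.clip.GradientClipByGlobalNorm(
        clip_norm=1.0,
        need_clip=lambda param: param.name.startswith('fc'))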
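
Finally, the upgraded Profiler with the new tracer_option parameter (a sketch assuming the fluid.profiler context manager):

    import paddle.fluid.profiler as profiler

    # 'All' profiles both CPU and GPU; tracer_option controls the detail level
    with profiler.profiler('All', sorted_key='total',
                           profile_path='/tmp/profile',
                           tracer_option='OpDetail'):
        pass  # run training or inference iterations here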

Performance Optimization

  • Imperative Programming Mode ("Dynamic Graph"):

    • Optimizes the data structure of automatically generated OP functions to reduce framework overhead; single-card training speed of the ptb lm model increases by 4%.
    • Optimizes the InferVarType interface design to reduce framework overhead, speeding up InferVarType and increasing ptb lm model training speed by over 5%.
    • Reduces the addition of unnecessary attributes in dynamic graph ops to lower framework overhead, increasing ptb lm model training speed by 4%.
    • To improve data loading performance, implements shared-memory storage for Tensors and a Tensor serialization/deserialization mechanism to support transmitting Tensors between processes; this optimizes the original asynchronous DataLoader of the dynamic graph, further increasing single-card training speed of the ResNet model on a P40 machine by over 15%.
    • Optimizes dynamic graph Variable slicing, improving performance by 60%, and supports negative step values in slices.
  • Declarative Programming Mode ("Static Graph"):

    • Adds an automatic fusion function that fuses subgraphs composed of element-wise operators such as the elementwise ops, activations, sum, cast, scale, and fill_constant. The performance gain depends on the number of matching subgraphs found in the network; currently it greatly improves RNN language model training speed.
  • OP Performance Optimization

    • Adds caching of Prepare Data in the OP execution process, giving an average 2% speedup across 10+ model training tasks and reducing framework overhead by up to 6%.
    • Optimizes the GPU performance of depthwise_conv2d to accelerate by 20% at the common parameter settings.
    • Optimizes the implementation of the GPU broadcast mode of elementwise_mul to accelerate by 2-50 times for different inputs.
    • Optimizes the GPU implementation of conv2d_transpose to achieve significant performance improvement for fp16.
    • Optimizes the implementation of shape OP to avoid waiting due to unnecessary data transfer between different devices.

Bug Fixes

  • Fixes the Xbyak::Error raised by SGD when the data size is very large, so that SGD runs successfully on large amounts of data.

  • Fixes the MKL memory leak on Linux.

  • Fixes the bug in command-line argument parsing at dynamic graph multi-card startup.

  • Fixes the problem that occurred when the clone(for_test=True) API processed a network containing control flow ops.

  • Fixes the cyclic dependency between the dynamic graph and static graph modules.

  • Fixes the pickle dump/load compatibility problem between Python 2 and 3.

  • Fixes the problem that a dynamic graph Layer could not register or override a parameter as None.

  • Fixes the Op output Var naming conflict caused when different Ops' name attributes had the same value.

  • Fixes the output LoD setting of the concat op when axis=0; it should be the concatenation of the input LoDs.

  • Fixes the bug that the mean and var of BatchNorm could not be updated in eval mode.

Inference Deployment

Paddle Inference

Function Upgrade

  • Adds TRT subgraph support for inputs with dynamic shapes, together with the new config.SetTRTDynamicShapeInfo(min_input_shape, max_input_shape, opt_input_shape) interface. The interface specifies the minimum, maximum, and optimal shapes of the subgraph's inputs (at the optimal shape, TRT selects the best runtime kernel). Once shape information is specified, Paddle-TRT runs in dynamic shape mode, and inference accepts any input shape between min_input_shape and max_input_shape. The function supports models with dynamic shape inputs such as FCN, Faster RCNN, and Ernie/Bert (see the sketch after this list).
  • To meet the need to bind the computation stream to the current thread during inference, refactors the device context data structure to support CUDA stream priorities and adds a thread-local GPU memory allocator, ThreadLocalAllocator, giving different threads the ability to bind to different CUDA streams.
  • The MKL-DNN quantization function now supports all quantized models; adds support for the 'weight_quantize_type' values range_abs_max and 'channel_wise_abs_max', and supports the out_threshold attribute.
  • Adds an inference API reference to the official website.
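
A sketch of enabling the TRT dynamic shape mode from Python. It assumes that the Python AnalysisConfig binding mirrors the C++ SetTRTDynamicShapeInfo interface as set_trt_dynamic_shape_info and that the model has a single input named "image"; the model path is illustrative:

    from paddle.fluid.core import AnalysisConfig, create_paddle_predictor

    config = AnalysisConfig('./ernie_model')  # illustrative model directory
    config.enable_use_gpu(1000, 0)            # memory pool size (MB), device id
    config.enable_tensorrt_engine(
        workspace_size=1 << 30,
        max_batch_size=1,
        min_subgraph_size=5,
        precision_mode=AnalysisConfig.Precision.Float32)
    # per-input minimum / maximum / optimal shapes
    config.set_trt_dynamic_shape_info(  # assumed Python name for the C++ API
        {'image': [1, 3, 112, 112]},    # min_input_shape
        {'image': [1, 3, 448, 448]},    # max_input_shape
        {'image': [1, 3, 224, 224]})    # opt_input_shape
    predictor = create_paddle_predictor(config)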

Performance Optimization

  • Targeted CUDA optimizations for Bert/Ernie: adds an embedding_eltwise_layernorm fusion implementation and optimizes the multihead_matmul and fc_elementwise_layernorm fusions. Compared with the previous version, ernie fp32 inference improves from 10 ms to 8.7 ms, or 13%, on a P4 card with cuda10 at batch_size=1.
  • TRT subgraph support for dynamic shapes of the Ernie/Bert model: on a T4 card with cuda10 at batch_size=1, ernie fp16 inference takes 2.9 ms, 56% faster than the 6.6 ms of fp32.
  • Paddle-TRT optimization of mobilenet v3: supports the TRT hard sigmoid OP and adds a hard swish plugin. At batch_size=1, inference improves from 3.48 ms to 2.29 ms (34%) on P4, and from 2.76 ms to 1.33 ms (51%) on V100.
  • Adds DNNL support for the swish activation function, improving ShuffleNet performance by 76% on a single 6248 core.
  • Quantization: adds support for matmul op quantization; adds matmul+transpose+reshape and scale+matmul fuses. With matmul quantization and the new fuses, both the Ernie fp32 model and the quantized INT8 model gain about 10% in performance (on a 6271 machine).
  • Adds DNNL inplace op support: currently elementwise_add and most activation functions, including softmax, gelu, and relu, can execute inplace, improving Ernie performance by about 2% on 6248.
  • With the above optimizations and quantization, the current Ernie INT8 model is about 5.51 times faster than the FP32 model without DNNL optimizations (including fuses) and quantization.

Bug Fixes

  • Fixes the problem in inference-phase TRT int8 offline quantization where unstable fusion strategies made locally and server-generated calibration table names inconsistent, so a locally generated calibration table could not be recognized and loaded by the service and was regenerated. Calibration table names now remain consistent across multiple runs of TRT offline quantization calibration.
  • Fixes a parameter-passing error during calibration table generation in inference-phase TRT offline quantization, which affected the final quantized inference accuracy to some extent.

Paddle Serving

Improved Usability

  • Encapsulates the C++ code with pybind and provides a Python API. Provides python2 and python3 whl installation packages for paddle_serving_server, paddle_serving_server_gpu, and paddle_serving_client. Releases Version 0.2.1
  • Provides CPU and GPU Docker images for the centos6/7 environment, including executable and compilable images
  • Provides an API that directly saves the models and configuration files required for Serving deployment, connecting seamlessly with the Paddle training framework
  • Starts a model inference service with a single command (see the example after this list)
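
For instance, once a model has been saved with the Serving APIs, a service can be launched in one line (a hedged example following the 0.2.x command line; the model directory name is illustrative):

    python -m paddle_serving_server.serve --model uci_housing_model --thread 10 --port 9292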

Function Improvements

  • Provides RPC and HTTP inference service methods
  • Supports Python and Go language clients
  • Supports the A/B test
  • Releases paddle_serving_app Version 0.0.2, providing preprocessing APIs for LAC word segmentation, Chinese BERT model preprocessing, and image processing
  • Supports the timeline visualization of the inference service

Performance Optimization

  • In RPC service mode, the Chinese BERT semantic vector representation service runs inference 2.04 times faster than paddle_gpu_serving Version 0.8.2 on a single P4 card at batch size 1.

Documents and Examples

  • Improves and adds Chinese and English usage documents, Chinese and English development and deployment documents, and a Chinese performance tuning document.
  • Provides seven model inference service examples, covering Chinese word segmentation, English sentiment analysis, Chinese semantic vector representation, CTR prediction, image classification, and other fields.

Paddle Lite

Function Upgrade

  • Compilation and Installation
    • Optimizes the Paddle-Lite compilation scripts: splits out separate compilation scripts for the Android, iOS, and ArmLinux platforms to improve usability.
    • Supports Python installation: the Paddle-Lite Python library can be installed on PC Linux/Windows/Mac, and Python can call the Lite opt tool to optimize models (see the sketch after this list).
    • Supports Windows compilation: Paddle-Lite can be compiled in the Windows environment; currently only x86 compilation is supported on Windows.
  • Basic Functions
    • Adds a subgraph partitioning function. For models connected to Lite via subgraphs, a subgraph can be partitioned manually through a configuration file so that specified OPs run on the host side, improving performance (ssd_mobilenet_v1 accelerated by about 4.3 times).
    • Optimizes support for quantized models produced by the post-training quantization method without calibration data; for common classification models quantized to 8-bit, the precision loss decreases from 2% to 0.1%.
  • Hardware Support
    • Adds RK 1808 NPU, supporting the fully quantized MobileNetV1 model.
    • Adds MTK MT8175 APU, supporting the fully quantized MobileNetV1 model.
    • Adds a Baidu Kunlun XPU kernel access method, supporting the ERNIE, ResNet-50, and BERT models.
    • Adds Cambricon MLU270, supporting the following models: Resnet50 (int8) and Senet101 (int8).
    • Adds Bitmain BM1682, supporting the following models: Mobilenet, Yolov3, Mobilenet-ssd, Inceptionv4, Vgg16, DarkNet-YOLOv3, and PyramidBox.
    • Mobile GPU (opencl): supports the mobilenetv1/v2, GAN-related, mnasnet, squeezenet, shufflenet, resnet, unet, and vgg16 models.
    • Nvidia GPU: adds exec multi-stream support. For model structures with parallelism, performance is expected to improve by 5-15% over a single stream; common vision models generally have no parallel structure and gain nothing from enabling multiple streams. Multi-stream is enabled on the cuda platform with config.set_multi_stream(true);.
    • x86 platform optimization: reduces the inference library size (from 200M to 16M), supports disabling LOG (--shutdown_log=ON), lets full_api share model weight parameters across threads, and adds an x86 cxx_demo.
    • Huawei NPU:
      • Speeds up benchmark models (mobilenet_v1, mobilenet_v2, squeezenet_v1.1, mnasnet, and shufflenet_v2) by 5-10 times.
      • Supports caching NPU models of different sizes to improve the performance of models with variable input sizes.
  • Demo:
    • Adds an Android Demo for real-time mask detection based on camera preview
    • Adds an Android Demo for real-time face key point detection and beauty
    • Adds an Android Demo for Boston house price prediction with on-device training
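
A sketch of driving the Lite opt tool from Python after installing the paddlelite wheel, assuming the Opt helper exposed by the Python package of this period; the model path and output name are illustrative:

    from paddlelite.lite import Opt

    opt = Opt()
    opt.set_model_dir('./mobilenet_v1')       # illustrative model directory
    opt.set_valid_places('arm')               # target backend
    opt.set_optimize_out('mobilenet_v1_opt')  # output name prefix
    opt.run()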

Performance Optimization

  • Reduces InferShape time: when the Predictor runs continuously, total infershape time drops from 0.25 ms to 0.08 ms.
  • Some opencl kernels (fc_buffer, elementwise_add, scale, activation, and grid_sampler) support dynamic shape, moving part of the computation into ReinitWhenNeeded.
  • Optimizes sgemm performance on low-end devices.
  • Optimizes the Precision Profiler function: improves the output layout; adds standard deviation and growth rate attributes (so ordering can be compared when mean and standard deviation are identical); supports printing the precision of each layer's OpenCL Image/Buffer output; supports writing each layer's precision results (the final precision summary) to the device to facilitate APP debugging; and decouples precision printing from the original time-consumption profiler.

Bug Fixes

  • Fixes the problem that results varied randomly across Predictors because the conv op's act_type was not initialized.
  • Fixes several opencl kernel problems: a bilinear kernel compatibility issue on mali GPUs, incorrect instance norm results, a reshape kernel registration error that caused model conversion failures, and misspelled kernel names for exp and tanh that caused kernel lookup and op binding failures.
  • Fixes the problem that opencl execution hangs at the end of computation on mali GPUs.
  • Fixes opencl resource problems by isolating cl::kernel, cl::program, and other resources within each Predictor.

PaddleSlim

Quantization

  • Adds a post-training quantization method that requires no calibration data; int16 is lossless in precision, and the int8 precision loss is below 0.1%.
  • Enhances the quantization function: improves the output scale information of quantization OPs so that CPU inference fully supports quantized models.

Pruning

  • Adds two pruning strategies, FPGM and BN scale; at the same compression ratio, accuracy improves by 0.6% on the MobileNetV3-YOLOV3-COCO task.
  • Adds a user-defined pruning strategy API so developers can quickly add compression strategies.
  • Adds default processing logic for newly added operators in the pruning function, extending support to pruning more complex networks.

NAS

  • Adds the DARTS series of search algorithms and provides an extension interface so users can investigate and implement new model structure search strategies.
  • Adds an early stopping mechanism to model structure search, improving the usability of the search function.
  • Adds a model structure search strategy based on reinforcement learning and provides an extension interface as a reference for users investigating and implementing new strategies.

Pantheon

  • Supports data transmission and storage in fp16 format and supports knowledge transmission over multiple channels in online distillation mode, increasing the transmission efficiency of knowledge data.
  • Adds a lexical analysis example so users can build their own distillation tasks on top of it

Development Kits

PaddleDetection

  • Enhancement of the Richness of Models

    • Adds Efficientdet-D0: COCO val2017 precision is 0.3 higher than TF's (33.8 vs 33.5); inference speed without postprocessing is basically equal or slightly better (~13 ms vs ~14 ms, measured on T4).
    • Adds the instance segmentation model HTC: inference speed on V100 reaches 11.5 FPS, 7.4 FPS higher than the competing product; on COCO 2017, BBox mAP is 42.1% and Mask mAP is 37.1%.
    • Adds the anchor-free model FCOS: COCO val2017 precision is 1.1 higher than pytorch's (39.8 vs 38.7).
    • Adds the MobileNetV3 backbone network to YOLOv3, reaching 31.6 precision on the COCO dataset.
    • Adds the anchor-free model CornernetSqueeze: COCO val2017 precision is 34.5, 0.1 higher than the competing product; the optimized model reaches 38.2, up 3.7 points, and is 5% faster than yolo_v3 darknet.
    • Adds the practical server-side object detection model cascade_rcnn_resnet50_vd_fpn_dcn: at 20 FPS on V100, COCO mAP is 47.8%, better than the competing product EfficientDet.
  • Launch of Three Mobile Models

    • SSDLite series of models: the ssdlite-mobilenet_v3 large model reaches 22.8% mAP on COCO with 95 ms single-thread inference on the Qualcomm Snapdragon 845; the ssdlite-mobilenet_v3 small model reaches 16.6% mAP with 40 ms single-thread inference on the Snapdragon 845, with precision better than the competing product; the ssdlite-mobilenet_v1 model reaches 23.6% mAP with 140 ms single-thread inference on the Snapdragon 845, with precision better than the competing product.
    • yolo v3: the pruned yolov3_mobilenet_v3 model runs single-thread inference in 91 ms on the Qualcomm Snapdragon 845 with precision 24.6 (input size 320*320), leading the competing framework's SSDLite model in both speed and precision.
    • Faster RCNN: on the COCO dataset, cascade_rcnn_mobilenet_v3 large_fpn reaches 25.0% mAP at input size 320x320 with 87 ms single-thread inference on the Qualcomm Snapdragon 845, and 30.2% mAP at input size 640x640 with 351 ms single-thread inference.
  • Inference Deployment Refactoring:

    • Adds a Python inference deployment process and supports the RCNN, YOLO, SSD, RetinaNet, and face series of models. Supports video inference.
    • Refactors C++ inference deployment and improves the usability.
  • Usability Improvement and Function Components

    • Adds AutoAugment data augmentation.
    • Upgrades the document structure of the detection library.
    • Supports automatic shape matching for transfer learning.
    • Optimizes memory usage in the mask branch evaluation phase.
    • Upgrades the inference deployment function, adding Python-side image and video inference.

PaddleSeg

  • Adds the Lovasz Loss, which effectively improves the accuracy of multi-class segmentation

  • Fully upgrades the portrait segmentation series of models

    • Releases the first mobile real-time portrait segmentation model HumanSeg-lite
    • Adds a video-level segmentation postprocessing solution based on optical flow algorithm
  • Adds a remote sensing image segmentation solution

    • Adds a data pre-processing solution for multi-channel remote sensing images
    • Adds a data augmentation strategy for multi-channel images
    • Provides a tutorial on the segmentation of two meteorological remote sensing fields including snow detection and cloud detection

PaddleClas

  • Adds the MobileNetV3 series of models and evaluates the performance of 23 network series and 117 pre-training models.
  • Adds the SSLD knowledge distillation solution, improving recognition accuracy by more than 3%, and releases six distillation models including resnet50_vd (82.4%) and mobilenetv3 (78.9%).
  • Adds eight data augmentation methods: AutoAugment, RandAugment, CutOut, RandErasing, HideAndSeek, GridMask, Mixup, and Cutmix, used to increase the diversity of training samples and improve the generalization of models.
  • Adds a 100,000-class image classification pre-training model; for image classification business scenarios, recognition accuracy can improve by up to 30%.

PaddleOCR

  • Adds DB and EAST text detection algorithms.
  • Adds Rosetta, CRNN, STAR-Net, and RARE text recognition algorithms.
  • Adds an ultra-lightweight OCR model with a total size of only 8.6M (4.1M for text detection, 4.5M for text recognition), supporting text recognition in scenarios such as horizontal and vertical layout, long text, and mixed Chinese/English text with digits.

Parakeet

  • Releases English pre-training models and audio samples for WaveFlow (res channel=64/128), ClariNet, WaveNet, and other models.
  • Fixes the excessively slow Conv2DTranspose fp16 kernel and simplifies WaveFlow inference logic in fp16 mode.
  • Significantly increases model training speed: optimizing data preprocessing and OP computation logic brings multi-fold speedups on DeepVoice3, TransformerTTS, and other models.

Utility Components

PaddleHub

  • Enhances the richness of vision models; the total number of pre-trained models reaches 120+.
    • Adds the large-scale vision pre-training models and greatly improves the fine-tune effects of image classification and object detection tasks
    • Adds the industrial short video classification model VideoTag and supports the recognition of more than 3000 types of Chinese tags
    • Adds a lightweight Chinese OCR model supporting one-click quick OCR recognition (see the sketch after this list)
    • Adds pedestrian detection, vehicle detection, and animal recognition models, as well as the model that won the Object365 2019 large-scale object detection challenge
  • Fine-tune API Upgrade
    • Adds five predefined networks for text classification tasks: CNN, BOW, LSTM, BiLSTM, and DPCNN
  • Dynamic Graph Capability Upgrade
    • BERT-type pre-training models support one-click loading in dynamic graph mode
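
The one-click OCR usage mentioned above looks roughly like the following sketch. The module name chinese_ocr_db_crnn_mobile and the recognize_text call follow the PaddleHub model zoo of this period, and the image path is illustrative:

    import cv2
    import paddlehub as hub

    ocr = hub.Module(name='chinese_ocr_db_crnn_mobile')
    result = ocr.recognize_text(images=[cv2.imread('./test.jpg')])
    print(result)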

PaddleX

  • Newly releases PaddleX, the full-process development tool for PaddlePaddle
  • Opens up the entire deep learning development process from data access to inference deployment and provides a concise, easy-to-understand Python API
  • Covers the four mainstream CV task scenarios of image classification, object detection, semantic segmentation, and instance segmentation, and integrates utility components such as PaddleHub, PaddleSlim, VisualDL, and Paddle Lite.
  • Presets 26 classes (43 models in total) of pre-training models distilled from industrial practice and many characteristic, advantageous Paddle models.
  • Provides advanced functions such as automatic data analysis, automatic hyperparameter recommendation, data augmentation strategies, model pruning training, model quantization, pre-training model saving and reuse, multi-platform release and deployment, and model encryption.
  • Innovatively integrates a model explainability analysis function
  • Provides an officially implemented visual front-end demo supporting one-click installation on Windows, Linux, and Mac systems.

VisualDL

  • Releases VisualDL Version 2.0 beta
  • Fully upgrades the back-end kernel: lighter, faster, more compatible, with support for file storage system extension
  • Fully upgrades the APIs so that visual analysis takes less code, significantly improving usability (see the sketch after this list)
  • Upgrades UI and interaction, provides better localization support, achieves clearer and more intuitive visual analysis, and gives users immersive experience
  • Deeply integrates with Paddle development kits and utility components and provides a smoother deep learning development experience
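
A minimal sketch of logging a scalar with the upgraded API, assuming the VisualDL 2.0 LogWriter interface; the log directory and tag are illustrative:

    from visualdl import LogWriter

    with LogWriter(logdir='./log/scalar_test') as writer:
        for step in range(100):
            writer.add_scalar(tag='train/loss', step=step, value=1.0 / (step + 1))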

PaddleFL

  • Releases PaddleFL Version 1.0
    • Open sources federated learning based on multi-party computation (MPC), supporting horizontal, vertical, and other federated learning scenarios
    • Refactors the original framework to integrate and open source the new and original federated learning solutions
    • Adds the function of converting a single-machine model into an FL trainable program to support more models and scenarios

PGL

  • Releases the industry's first graph neural network model, ERNIESage, which combines semantic information with structural information
  • Adds PGL-KE; PGL now covers 25+ graph learning models, including walk-based, message-passing, and knowledge-embedding models
  • Adds Graph Batch, Graph Pooling, and other graph operators
  • Fully supports the Open Graph Benchmark test set and releases the corresponding SOTA results
  • Adds the MetaPath2Vec++, Multi-MetaPath2Vec++, STGCN, GIN, and PinSage models to the Model Zoo

PARL

  • Open sources the industry's first evolutionary learning application framework EvoKit
  • Adds support for Multi-Agent RL algorithms, including MADDPG
  • Adds the support for multi-card training and releases an example of a multi-card DQN algorithm
  • Open sources SOTA algorithms TD3 and SAC in the continuous control field
  • Open sources the NeurIPS2019 reinforcement learning challenge champion model and training solution

Paddle Quantum (Quantum Computation Laboratory)

  • First release of Paddle Quantum. Paddle Quantum is a quantum machine learning toolkit developed on top of Baidu PaddlePaddle. It supports building and training quantum neural networks and provides an easy-to-use quantum machine learning development kit along with cutting-edge quantum application toolsets for quantum optimization, quantum chemistry, and more, making PaddlePaddle the first deep learning platform in China to support quantum machine learning.
    • Supports the QAOA algorithm for solving the Max-Cut problem
    • Supports the VQE algorithm for computing the minimum eigenvalue of H_2
    • Supports the SSVQE algorithm for computing the eigenspectrum of a given Hamiltonian
    • Supports the VQSD algorithm for computing the diagonalized form of a quantum state and giving its eigendecomposition
    • Supports the Gibbs algorithm for generating the Gibbs state of a given Hamiltonian at a given temperature
    • Supports common quantum computing functions
    • Supports describing U_Ansatz quantum circuits
