diff --git a/contrib/ACE2P/README.md b/contrib/ACE2P/README.md
index 3dfdcca521acd58c4d859c8d605560e0a0904608..24a884b56bfa3cd289d93ac6fa1822bf0b5e9ccd 100644
--- a/contrib/ACE2P/README.md
+++ b/contrib/ACE2P/README.md
@@ -37,8 +37,6 @@ The ACE2P model contains three branches:
 ![](imgs/result.jpg)
 
-![](ACE2P/imgs/result.jpg)
-
 Human parsing is a fine-grained semantic segmentation task that aims to identify the components of a human image (e.g., body parts and clothing) at the pixel level. This section uses the championship model Augmented Context Embedding with Edge Perceiving (ACE2P) for prediction and segmentation.
 
 ## Code Usage Instructions
@@ -79,11 +77,11 @@ python -u infer.py --example ACE2P
 Original image:
 
- ![](ACE2P/imgs/117676_2149260.jpg)
+ ![](imgs/117676_2149260.jpg)
 
 Prediction result:
 
- ![](ACE2P/imgs/117676_2149260.png)
+ ![](imgs/117676_2149260.png)
 
 ### Notes
diff --git a/docs/configs/train_group.md b/docs/configs/train_group.md
index 2fc8806c457d561978379589f6e05657e62a6e86..96f2f640a2e0c63689420580aeb28c435cb863db 100644
--- a/docs/configs/train_group.md
+++ b/docs/configs/train_group.md
@@ -45,7 +45,7 @@ The TRAIN group holds all training-related configuration
 Whether to synchronize BN mean and variance across multiple GPUs.
 
 The Synchronized Batch Norm cross-GPU batch normalization strategy was first proposed in the paper [MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240),
-and its effectiveness was validated with YOLOv3 in the paper [Bag of Freebies for Training Object Detection Neural Networks](https://arxiv.org/pdf/1902.04103.pdf); [PaddleCV/yolov3](https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/yolov3) implements this set of strategies and achieves 5.9 higher mAP on COCO17 than the Darknet version.
+and its effectiveness was validated with YOLOv3 in the paper [Bag of Freebies for Training Object Detection Neural Networks](https://arxiv.org/pdf/1902.04103.pdf).
 
 Building on the sync_batch_norm strategy of the PaddlePaddle framework, PaddleSeg supports training segmentation models with a large batch size across multiple GPUs, which yields higher mIoU accuracy.
diff --git a/docs/models.md b/docs/models.md
index f02a06c098679fef1aea438d3f99f43810674889..2b4c4991852eaec37c2ad3a9bd3c1f539b16c838 100644
--- a/docs/models.md
+++ b/docs/models.md
@@ -1,7 +1,7 @@
 # Introduction to PaddleSeg Segmentation Models
-- [U-Net](#U-Net) 
-- [DeepLabv3+](#DeepLabv3) 
+- [U-Net](#U-Net)
+- [DeepLabv3+](#DeepLabv3)
 - [PSPNet](#PSPNet)
 - [ICNet](#ICNet)
 - [HRNet](#HRNet)
@@ -75,12 +75,10 @@ Fast-SCNN [7] is a real-time semantic segmentation network. On top of a two-branch structure
 
 [3] [Pyramid Scene Parsing Network](https://arxiv.org/abs/1612.01105)
 
-[4] [Fully Convolutional Networks for Semantic Segmentation](https://arxiv.org/abs/1605.06211)
+[4] [Fully Convolutional Networks for Semantic Segmentation](https://arxiv.org/abs/1411.4038)
 
 [5] [ICNet for Real-Time Semantic Segmentation on High-Resolution Images](https://arxiv.org/abs/1704.08545)
 
 [6] [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/abs/1908.07919)
 
 [7] [Fast-SCNN: Fast Semantic Segmentation Network](https://arxiv.org/abs/1902.04502)
-
-
diff --git a/docs/usage.md b/docs/usage.md
index b07a01ebcb3a9a2527ae60a4105f6fd8410f17f7..2e6540f1bf4d55256a7bd5568ae2edfcf31c366b 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -21,7 +21,7 @@
 
 ## 2. Download the Training Data
 
-![](../turtorial/imgs/optic.png)
+![](../tutorial/imgs/optic.png)
 
 We have prepared a fundus medical segmentation dataset for optic disc segmentation, containing 267 training images, 76 validation images, and 38 test images. Download it with the following command:
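The train_group.md hunk above only trims a citation, but the option it documents is a one-line config switch. A minimal sketch of how it might appear in a PaddleSeg YAML training config; the exact key names and defaults here are assumptions and should be checked against the version of train_group.md being edited:

```yaml
# Hypothetical excerpt of a PaddleSeg training config.
# SYNC_BATCH_NORM synchronizes BN mean/variance across GPUs,
# so multi-card training behaves like true large-batch training.
BATCH_SIZE: 16
TRAIN:
    SYNC_BATCH_NORM: True
```

Since synchronized BN only has an effect with multiple devices, this setting would typically be paired with a multi-card launch, e.g. exposing several GPUs via `CUDA_VISIBLE_DEVICES` before starting the training script.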