diff --git a/Densely Connected Convolutional Networks.pdf b/Densely Connected Convolutional Networks.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..20ae6afb91dc1c2a3de86c1dd157357f05afb654
Binary files /dev/null and b/Densely Connected Convolutional Networks.pdf differ
diff --git a/README.md b/README.md
index 9d572a777ef37cf9202afa4fcc9dfeb3507fbed0..97b3d354a920a3f3940804d11bac9a1eb731a306 100644
--- a/README.md
+++ b/README.md
@@ -92,4 +92,8 @@ The depth of representations is of central importance for many visual recognitio
 
 We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
 
+## Densely Connected Convolutional Networks
+
+Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
+
 ![](https://maoxianxin1996.oss-accelerate.aliyuncs.com/codechina/20210615093836.png)
\ No newline at end of file
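
Below is a minimal sketch of the dense connectivity pattern that the added abstract describes: within a block, each layer receives the concatenation of the feature-maps of all preceding layers, which is what yields L(L+1)/2 direct connections among L layers. This is written in PyTorch and is not taken from the linked DenseNet repository; the class names `DenseLayer`/`DenseBlock` and the channel counts are illustrative assumptions rather than the paper's official configuration.

```python
# Illustrative sketch only, not the official DenseNet implementation.
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One layer of a dense block: BN -> ReLU -> 3x3 conv producing `growth_rate` new feature-maps."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.relu(self.norm(x)))

class DenseBlock(nn.Module):
    """Dense block: layer i sees the input plus the outputs of all i preceding layers."""
    def __init__(self, num_layers: int, in_channels: int, growth_rate: int):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Each layer takes the concatenation of all preceding feature-maps as input.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a 4-layer dense block on a CIFAR-sized input (channel counts are illustrative).
block = DenseBlock(num_layers=4, in_channels=16, growth_rate=12)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32]) -> 16 + 4 * 12 channels
```

The printed shape reflects the feature-reuse property the abstract emphasizes: each layer contributes `growth_rate` new feature-maps, and all of them, together with the block input, remain available to every subsequent layer.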