Commit 4d87f4cc authored by W wizardforcel

dl1 code

Parent 1e8c7995
* You can run a cell by selecting it and pressing `shift+enter` (you can hold `shift` and press `enter` repeatedly to keep running cells), or you can click the "Run" button at the top. Cells can contain code, text, pictures, videos, and so on.
* Fast.ai requires Python 3
```py
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
```py
# This file contains all the main external libs we'll use
from fastai.imports import *
```
```py
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
```
```py
PATH = "data/dogscats/"
sz = 224
```
**Look at the pictures first** [[15:39](https://youtu.be/IPBSB1HLNLo%3Ft%3D15m40s)]
```py
!ls {PATH}
```
```
models sample test1 tmp train valid
```
* `!` tells the notebook to use bash (shell) rather than Python
* If you are not familiar with training and validation sets, check out the Practical Machine Learning course (or read [Rachel's blog post](http://www.fast.ai/2017/11/13/validation-sets/))
```py
!ls {PATH}valid
```
```
cats dogs
```
```py
files = !ls {PATH}valid/cats | head
files
```
```
['cat.10016.jpg',
 'cat.1001.jpg',
 'cat.10026.jpg',
 'cat.10048.jpg',
 'cat.10050.jpg',
 'cat.10064.jpg',
 'cat.10071.jpg',
 'cat.10091.jpg',
 'cat.10103.jpg',
 'cat.10104.jpg']
```
* This folder structure is the most common way of sharing and providing image classification datasets. Each folder tells you the label (e.g. `dogs` or `cats`).
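For example, the dogs-vs-cats layout used above looks roughly like the tree in the comments below, and the label of an image is simply the name of its parent folder. A minimal sketch of reading labels from such a layout (assuming the `data/dogscats/` path above):

```py
from pathlib import Path

path = Path("data/dogscats/")

# Expected layout (the label is the parent folder name):
#   data/dogscats/train/cats/cat.123.jpg
#   data/dogscats/train/dogs/dog.456.jpg
#   data/dogscats/valid/cats/...
#   data/dogscats/valid/dogs/...
for img_file in sorted((path / "valid").glob("*/*.jpg"))[:3]:
    label = img_file.parent.name   # 'cats' or 'dogs'
    print(label, img_file.name)
```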
```py
img = plt.imread(f'{PATH}valid/cats/{files[0]}')
plt.imshow(img);
```
![](../img/1_Uqy-JLzpyZedFNdpm15N2A.png)
* `f'{PATH}valid/cats/{files[0]}'` is a Python 3.6 f-string (formatted string literal), a convenient way to format strings.
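A quick illustration of the f-string syntax (the values here are made up, only the syntax matters):

```py
folder = "valid/cats"
name = "cat.1001.jpg"
# Any expression inside {} is evaluated and substituted into the string
print(f"{folder}/{name}")                     # valid/cats/cat.1001.jpg
print(f"{name} has {len(name)} characters")   # cat.1001.jpg has 12 characters
```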
```py
img.shape
```
```
(198, 179, 3)
```
```py
img[:4,:4]
```
```
array([[[ 29,  20,  23],
        [ 31,  22,  25],
        [ 34,  25,  28],
        [ 37,  28,  31]],

       [[ 60,  51,  54],
        [ 58,  49,  52],
        [ 56,  47,  50],
        [ 55,  46,  49]],

       [[ 93,  84,  87],
        [ 89,  80,  83],
        [ 85,  76,  79],
        [ 81,  72,  75]],

       [[104,  95,  98],
        [103,  94,  97],
        [102,  93,  96],
        [102,  93,  96]]], dtype=uint8)
```
* `img` is a three-dimensional array (also known as a rank-3 tensor)
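To make that concrete, the three axes are height, width, and the RGB colour channels; a small sketch using a dummy array of the same shape as the cat image above:

```py
import numpy as np

img = np.zeros((198, 179, 3), dtype=np.uint8)  # same shape as the cat image above
height, width, channels = img.shape
print(height, width, channels)   # 198 179 3
print(img[0, 0])                 # one pixel: three values (R, G, B)
```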
Here are the three lines of code needed to train a model:
```py
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(resnet34, sz))
learn = ConvLearner.pretrained(resnet34, data, precompute=True)
learn.fit(0.01, 3)
```
```
[ 0.       0.04955  0.02605  0.98975]
[ 1.       0.03977  0.02916  0.99219]
[ 2.       0.03372  0.02929  0.98975]
```
* This will do 3 **epochs**, which means it will look at the entire set of images three times.
This is what the validation set labels (think of them as the correct answers) look like:
```py
data.val_y
```
```
array([0, 0, 0, ..., 1, 1, 1])
```
What do these 0s and 1s represent?
```py
data.classes
```
```
['cats', 'dogs']
```
* `data` contains the validation and training data
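So 0 means `cats` and 1 means `dogs`. As a small sketch, you can map the integer labels back to class names with numpy indexing (assuming the `data` object created above):

```py
import numpy as np

# Index the class-name array with the integer labels of the validation set
label_names = np.array(data.classes)[data.val_y]
print(label_names[:3], label_names[-3:])   # ['cats' 'cats' 'cats'] ['dogs' 'dogs' 'dogs']
```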
Let's make predictions on the validation set (the predictions are on a log scale):
```py
log_preds = learn.predict()
log_preds.shape
```
```
(2000, 2)
```
```py
log_preds[:10]
```
```
array([[ -0.00002, -11.07446],
       [ -0.00138,  -6.58385],
       [ -0.00083,  -7.09025],
       [ -0.00029,  -8.13645],
       [ -0.00035,  -7.9663 ],
       [ -0.00029,  -8.15125],
       [ -0.00002, -10.82139],
       [ -0.00003, -10.33846],
       [ -0.00323,  -5.73731],
       [ -0.0001 ,  -9.21326]], dtype=float32)
```
* The output represents a prediction for cat and a prediction for dog
```py
preds = np.argmax(log_preds, axis=1)  # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1])        # pr(dog)
```
* In PyTorch and Fast.ai, most models return the log of the predictions rather than the probabilities themselves (we will learn why later in the course). For now, just know that to get the probabilities you have to apply `np.exp()`.
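As a sanity check, each row of `log_preds` contains the log probabilities of the two classes, so exponentiating a row gives probabilities that sum to roughly 1 (a sketch using the first row shown above):

```py
import numpy as np

row = np.exp(log_preds[0])   # e.g. [0.99998, 0.0000155] for [-0.00002, -11.07446]
print(row, row.sum())        # the two class probabilities sum to about 1.0
```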
* Make sure you are familiar with numpy (`np`)
```py
# 1. A few correct labels at random
plot_val_with_title(rand_by_correct(True), "Correctly classified")
```
* The number above each image is the probability of it being a dog
```py
# 2. A few incorrect labels at random
plot_val_with_title(rand_by_correct(False), "Incorrectly classified")
```
![](../img/1_ZLhFRuLXqQmFV2uAok84DA.png)
```py
plot_val_with_title(most_by_correct(0, True), "Most correct cats")
```
![](../img/1_RxYBmvqixwG4BYNPQGAZ4w.png)
```py
plot_val_with_title(most_by_correct(1, True), "Most correct dogs")
```
![](../img/1_kwUuA3gN-xbNBIUjDBHePg.png)
More interestingly, here are the images the model was sure were dogs but which turned out to be cats, and vice versa:
```py
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
```
![](../img/1_gvPAqSdB9IRFmhU4DCk-mg.png)
```py
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
```
![](../img/1_jXaTLkWMrvpC8Yz0QfR6LA.png)
```py
most_uncertain = np.argsort(np.abs(probs - 0.5))[:4]
plot_val_with_title(most_uncertain, "Most uncertain predictions")
```
![](../img/1_wZDDn_XFH-z7libyMUlsBg.png)
![](../img/1_QindKA4Dt7Ol3CbICMSxWw.png)
Sigmoid and ReLU
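For reference, the two activations in the figure can be written in a couple of lines of numpy (a sketch, not the PyTorch/fastai implementations):

```py
import numpy as np

def sigmoid(x):
    # squashes any input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # zero for negative inputs, identity for positive inputs
    return np.maximum(0, x)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x))   # approx. [0.12, 0.5, 0.95]
print(relu(x))      # [0. 0. 3.]
```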
#### Dog vs. Cat Revisited - Choosing a Learning Rate [[01:11:41](https://youtu.be/IPBSB1HLNLo%3Ft%3D1h11m41s)]
```py
learn.fit(0.01, 3)
```
* The first number, `0.01`, is the learning rate.
* The _learning rate_ determines how quickly you update the _weights_ (or _parameters_). It is one of the most difficult parameters to set, because it significantly affects model performance.
* The method `learn.lr_find()` helps you find an optimal learning rate. It uses the technique developed in the 2015 paper [Cyclical Learning Rates for Training Neural Networks](http://arxiv.org/abs/1506.01186): we simply keep increasing the learning rate from a very small value until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like; a sketch of the idea follows below.
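A minimal sketch of that LR range test, assuming a generic PyTorch `model`, `criterion`, and `train_loader` (this is only an illustration of the idea, not fastai's `lr_find` implementation):

```py
import torch

def lr_range_test(model, criterion, train_loader, lr_min=1e-5, lr_max=10.0, steps=100):
    """Increase the learning rate multiplicatively each mini-batch and record the loss."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_min)
    mult = (lr_max / lr_min) ** (1 / steps)
    lr, lrs, losses = lr_min, [], []
    for i, (x, y) in enumerate(train_loader):
        if i >= steps:
            break
        optimizer.param_groups[0]["lr"] = lr
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        lrs.append(lr)
        losses.append(loss.item())
        if loss.item() > 4 * min(losses):   # stop once the loss blows up
            break
        lr *= mult
    return lrs, losses
```

You would then plot `losses` against `lrs` (on a log scale) and pick a learning rate a little below the point where the loss is still falling fastest, which is what the `learn.sched.plot()` call further down lets you read off.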
```py
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.lr_find()
```
Our `learn` object contains an attribute `sched` that holds our learning rate scheduler and has some convenient plotting functions, including this one:
```py
learn.sched.plot_lr()
```
![](../img/1_iGjSbGhX60ZZ3bHbqaIURQ.png)
We can plot loss against learning rate to see where our loss stops decreasing:
```py
learn.sched.plot()
```
![](../img/1_CWF7v1ihFka2QG4RebgqjQ.png)
#### Choosing the Number of Epochs [[1:18:49](https://youtu.be/IPBSB1HLNLo%3Ft%3D1h18m49s)]
```
[ 0.       0.04955  0.02605  0.98975]
[ 1.       0.03977  0.02916  0.99219]
[ 2.       0.03372  0.02929  0.98975]
```
* As many as you like, but if you run it for too long, accuracy may start to get worse. This is called "overfitting", and we will learn more about it later.
![](../img/1_BbxbH3gWu8RHMTuXZlDasA.png)
Author: [Chloe Sultan](http://forums.fast.ai/u/chloews)
![](../img/1_4MNEdvVzjzHbbtlub98chw.png)
[http://forums.fast.ai/t/fun-with-lesson8-rotation-adjustment-things-you-can-do-without-annotated-dataset/14261/1](http://forums.fast.ai/t/fun-with-lesson8-rotation-adjustment-things-you-can-do-without-annotated-dataset/14261/1)
![](../img/1_GZz25GtnzqaYy5MV5iQPmA.png)
[http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html](http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html)
![](../img/1_VROXSgyt6HWaJiMMY6ogFQ.png)
For easier reading
![](../img/1_hJeKM7VaXaDyvVTTXWlFqg.png)
[https://medium.com/@hortonhearsafoo/adding-a-cutting-edge-deep-learning-training-technique-to-the-fast-ai-library-2cd1dba90a49](https://medium.com/%40hortonhearsafoo/adding-a-cutting-edge-deep-learning-training-technique-to-the-fast-ai-library-2cd1dba90a49)
![](../img/1_GPdF7Xu7mAiUAYEDbT-SHA.png)
[https://arxiv.org/abs/1508.06576](https://arxiv.org/abs/1508.06576)
![](../img/1_rMC3ob6YdywFeTHcAruD_A.png)
[https://arxiv.org/abs/1707.02921](https://arxiv.org/abs/1707.02921)
![](../img/1_afBXEvE8aOwzNjRNb5bt6Q.png)
[http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html](http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html)
![](../img/1_Yj-niImdJg30IKlk0aBJFQ.png)
[**Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network**](https://arxiv.org/abs/1609.05158)
![](../img/1_GHf4mB-n_o6owwX6MoY_NQ.png)
[https://arxiv.org/abs/1707.02937](https://arxiv.org/abs/1707.02937)
![](../img/1_0-PpgltveKXnyjaMHPhCPA.png)
How many dog images there are for each breed
![](../img/1_KPYOb0uGgAmaqLr6JWZmSg.png)
Histogram of the rows
![](../img/1_IHxFF49erSrWw02s8H6BiQ.png)
[Visualizing and Understanding Convolutional Networks](https://arxiv.org/abs/1311.2901)
![](../img/1_ItxElIWV6hU9f_fwEZ9jMQ.png)
Quick Dogs v Cats
![](../img/1_AUQDWjcwS2Yt7Id0WyXCaQ.png)
I use [https://office.live.com/start/Excel.aspx](https://office.live.com/start/Excel.aspx%3Fui%3Den-US%26rs%3DUS)
![](../img/1_Xl9If92kjaI5OEIxKyNLiw.png)
Output MSE
![](../img/1_DS4ZfpUfsseOBayQMqS4Yw.png)
Overview of the [chain rule](https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/differentiating-vector-valued-functions/a/multivariable-chain-rule-simple-version)
![](../img/1_VEEVatttQmlWeI98vTO0iA.png)
The `batch_size` dimension and activation functions (e.g. relu, softmax) are not shown here
![](../img/1_gc1z1R1d5zHkYc75iqSWtw.png)
Layer operations are not shown in the diagram; remember that the arrows represent layer operations
![](../img/1_gBZslK323CITflsnXp-DSA.png)
[Video [1:27:57]](https://youtu.be/sHcLkfRrgoQ%3Ft%3D1h27m57s)
![](../img/1_xF-ab5Hn_3FGZRZtEGwFtw.png)
Predicting a character using characters 1 through n-1
![](../img/1_0-XkFkCIatPvenvKPfe2_g.png)
Predicting characters 2 through n using characters 1 through n-1
![](../img/1_9XXQ3J7G3rD92tFkusi4bA.png)
A standard fully connected network
![](../img/1__29x3zNI1C0vM3fxiIpiVA.png)
[http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/](http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/)
![](../img/1_qzfburCutJ3p-FYu1T6Q3Q.png)
[http://colah.github.io/posts/2015-08-Understanding-LSTMs/](http://colah.github.io/posts/2015-08-Understanding-LSTMs/)
![](../img/1_GyRVknri5gUktxgDmnxo4A.png)
Build your own box [[16:50](https://youtu.be/Z0ssNAbe81M%3Ft%3D16m50s)]
![](../img/1__r-uV41M5zUGTdV26N9bsg.png)
Reading papers [[21:37](https://youtu.be/Z0ssNAbe81M%3Ft%3D21m37s)]
![](../img/1_LBOcbbeBFypTgQ2AOit1EQ.png)
More opportunities [[25:29](https://youtu.be/Z0ssNAbe81M%3Ft%3D25m29s)]
![](../img/1_cNNnbJwImpFbqSKdA5_RIQ.png)
Topics for Part 2 [[30:12](https://youtu.be/Z0ssNAbe81M%3Ft%3D30m12s)]
![](../img/1_4V4sjFZxn-y2cU9tCJPEUw.png)
This is why the box gets bigger
![](../img/1_cCBVbJ2WjiPMlqX4nA2bwA.png)
3x3 convolutions at 15% opacity - clearly the center of the box has more dependencies
![](../img/1_QCo0wOgJKXDBYNlmE7zUmA.png)
From left (1x1, 2x2, 4x4 grids of anchor boxes). Notice that some of the anchor box is bigger than the original image.