Commit 944b84c9 authored by wnma3mz

add contributor and update files

Parent 64d0efe6
# An Introduction to different Types of Convolutions in Deep Learning
Let me give you a quick overview of the different types of convolutions and their benefits. For the sake of simplicity, I’m focusing on 2D convolutions only.
### Convolutions
First we need to agree on a few parameters that define a convolutional layer.
![img]()
2D convolution using a kernel size of 3, stride of 1 and padding
- **Kernel Size**: The kernel size defines the field of view of the convolution. A common choice for 2D is 3 — that is 3x3 pixels.
- **Stride**: The stride defines the step size of the kernel when traversing the image. While its default is usually 1, we can use a stride of 2 for downsampling an image similar to MaxPooling.
- **Padding**: The padding defines how the border of a sample is handled. A (half) padded convolution will keep the spatial output dimensions equal to the input, whereas unpadded convolutions will crop away some of the borders if the kernel is larger than 1.
- **Input & Output Channels**: A convolutional layer takes a certain number of input channels (I) and calculates a specific number of output channels (O). The number of parameters needed for such a layer can be calculated as I*O*K, where K equals the number of values in the kernel.
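As a quick sanity check of this rule, here is a minimal sketch in plain Python (no framework assumed) that counts the weights of a standard 2D convolutional layer, ignoring bias terms:

```python
# Parameter count of a plain 2D convolution, following the I*O*K rule above
# (bias terms ignored, as in the text).
def conv2d_params(in_channels, out_channels, kernel_size):
    k = kernel_size * kernel_size  # number of values in one 2D kernel
    return in_channels * out_channels * k

# e.g. a 3x3 convolution mapping 16 channels to 32 channels
print(conv2d_params(16, 32, 3))  # 4608
```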
### Dilated Convolutions
(a.k.a. atrous convolutions)
![img]()
2D convolution using a 3 kernel with a dilation rate of 2 and no padding
Dilated convolutions introduce another parameter to convolutional layers called the **dilation rate**. This defines a spacing between the values in a kernel. A 3x3 kernel with a dilation rate of 2 will have the same field of view as a 5x5 kernel, while only using 9 parameters. Imagine taking a 5x5 kernel and deleting every second column and row.
This delivers a wider field of view at the same computational cost. Dilated convolutions are particularly popular in the field of real-time segmentation. Use them if you need a wide field of view and cannot afford multiple convolutions or larger kernels.
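As an illustrative sketch (assuming PyTorch here; the article itself does not prescribe a framework), a dilation rate of 2 on a 3x3 kernel covers a 5x5 receptive field while still storing only 3x3 weights per input/output channel pair:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)  # (batch, channels, height, width)

# 3x3 kernel with dilation rate 2; padding=2 keeps the spatial size unchanged
dilated = nn.Conv2d(16, 32, kernel_size=3, dilation=2, padding=2)

print(dilated(x).shape)      # torch.Size([1, 32, 64, 64])
print(dilated.weight.shape)  # torch.Size([32, 16, 3, 3]) -- still only 9 values per kernel
```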
### Transposed Convolutions
(a.k.a. deconvolutions or fractionally strided convolutions)
Some sources use the name deconvolution, which is inappropriate because it’s not a deconvolution. To make things worse, deconvolutions do exist, but they’re not common in the field of deep learning. An actual deconvolution reverts the process of a convolution. Imagine inputting an image into a single convolutional layer. Now take the output, throw it into a black box, and out comes your original image again. This black box performs a deconvolution. It is the mathematical inverse of what a convolutional layer does.
A transposed convolution is somewhat similar because it produces the same spatial resolution a hypothetical deconvolutional layer would. However, the actual mathematical operation that’s being performed on the values is different. A transposed convolutional layer carries out a regular convolution but reverts its spatial transformation.
![img]()
2D convolution with no padding, stride of 2 and kernel of 3
At this point you should be pretty confused, so let’s look at a concrete example. An image of 5x5 is fed into a convolutional layer. The stride is set to 2, the padding is deactivated and the kernel is 3x3. This results in a 2x2 image.
If we wanted to reverse this process, we’d need the inverse mathematical operation so that 9 values are generated from each pixel we input. Afterward, we traverse the output image with a stride of 2. This would be a deconvolution.
![img]()
Transposed 2D convolution with no padding, stride of 2 and kernel of 3
A transposed convolution does not do that. The only thing it has in common with a deconvolution is that it guarantees the output will also be a 5x5 image, while still performing a normal convolution operation. To achieve this, we need to perform some fancy padding on the input.
As you can imagine now, this step will not reverse the process from above. At least not concerning the numeric values.
It merely reconstructs the spatial resolution from before and performs a convolution. This may not be the mathematical inverse, but for Encoder-Decoder architectures, it’s still very helpful. This way we can combine the upscaling of an image with a convolution, instead of doing two separate processes.
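To make the 5x5 → 2x2 → 5x5 example concrete, here is a small sketch (again assuming PyTorch; any framework with a transposed convolution would do). Note that only the spatial size is recovered, not the original values:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 5, 5)

# forward convolution: kernel 3, stride 2, no padding -> 2x2
down = nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=0)
# transposed convolution with the same settings -> back to 5x5
up = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=0)

h = down(x)
print(h.shape)      # torch.Size([1, 1, 2, 2])
print(up(h).shape)  # torch.Size([1, 1, 5, 5]) -- same size as x, different values
```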
### Separable Convolutions
In a separable convolution, we can split the kernel operation into multiple steps. Let’s express a convolution as **y = conv(x, k)**, where **y** is the output image, **x** is the input image, and **k** is the kernel. Easy. Next, let’s assume k can be calculated by **k = k1.dot(k2)**. This would make it a separable convolution, because instead of doing a 2D convolution with k, we could get to the same result by doing two 1D convolutions with k1 and k2.
![img]()
Sobel X and Y filters
Take the Sobel kernel, for example, which is often used in image processing. You can get the same kernel by multiplying the column vector [1, 2, 1].T with the row vector [1, 0, -1]. This requires 6 instead of 9 parameters while performing the same operation. The example above is what’s called a **spatial separable convolution**, which to my knowledge isn’t used in deep learning.
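The Sobel example can be checked in a few lines of NumPy: the 3x3 kernel is the outer product of a 3-element column vector and a 3-element row vector, so it is fully described by 6 numbers:

```python
import numpy as np

col = np.array([[1], [2], [1]])  # smoothing along one axis (3 parameters)
row = np.array([[1, 0, -1]])     # differencing along the other axis (3 parameters)

sobel_x = col.dot(row)           # 3x3 Sobel X kernel built from 6 parameters
print(sobel_x)
# [[ 1  0 -1]
#  [ 2  0 -2]
#  [ 1  0 -1]]
```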
*Edit: Actually, one can create something very similar to a spatial separable convolution by stacking a 1xN and a Nx1 kernel layer. This was recently used in an architecture called* [*EffNet*](https://arxiv.org/abs/1801.06434v1) *showing promising results.*
In neural networks, we commonly use something called a **depthwise separable convolution.** This performs a spatial convolution while keeping the channels separate and then follows with a pointwise (1x1) convolution across channels. In my opinion, it can be best understood with an example.
Let’s say we have a 3x3 convolutional layer on 16 input channels and 32 output channels. What happens in detail is that each of the 16 channels is traversed by 32 3x3 kernels, resulting in 512 (16x32) feature maps. Next, we merge one feature map from every input channel by adding them up. Since we can do that 32 times, we get the 32 output channels we wanted.
For a depthwise separable convolution on the same example, we traverse the 16 channels with one 3x3 kernel each, giving us 16 feature maps. Now, before merging anything, we traverse these 16 feature maps with 32 1x1 convolutions each and only then add them together. This results in 656 (16x3x3 + 16x32x1x1) parameters as opposed to the 4608 (16x32x3x3) parameters from above.
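A short sketch of this 16 → 32 channel example (assuming PyTorch; the `groups=16` argument is what makes the first convolution depthwise) reproduces both the 656-parameter count and the expected output shape:

```python
import torch
import torch.nn as nn

# one 3x3 kernel per input channel (depthwise), then 32 1x1 kernels across channels (pointwise)
depthwise = nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16, bias=False)
pointwise = nn.Conv2d(16, 32, kernel_size=1, bias=False)

params = sum(p.numel() for p in depthwise.parameters()) + \
         sum(p.numel() for p in pointwise.parameters())
print(params)  # 656 = 16*3*3 + 16*32*1*1

x = torch.randn(1, 16, 32, 32)
print(pointwise(depthwise(x)).shape)  # torch.Size([1, 32, 32, 32])
```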
The example is a specific implementation of a depthwise separable convolution where the so-called **depth multiplier** is 1. This is by far the most common setup for such layers.
We do this because of the hypothesis that spatial and depthwise information can be decoupled. Looking at the performance of the Xception model, this hypothesis seems to hold. Depthwise separable convolutions are also used on mobile devices because of their efficient use of parameters.
### Questions?
This concludes our little tour through different types of convolutions. I hope it helped to get a brief overview of the matter. Drop a comment if you have any remaining questions and check out [this](https://github.com/vdumoulin/conv_arithmetic) GitHub page for more convolution animations.
# ML Algorithms addendum: Passive Aggressive Algorithms
Passive Aggressive Algorithms are a family of online learning algorithms (for both classification and regression) proposed by Crammer et al. The idea is very simple and their performance has been proven to be superior to many other alternative methods, such as the [Online Perceptron](https://en.wikipedia.org/wiki/Perceptron) and [MIRA](https://en.wikipedia.org/wiki/Margin-infused_relaxed_algorithm) (see the original paper in the reference section).
### Classification
Let’s suppose we have a dataset:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_1.png)
The index t has been chosen to mark the temporal dimension: in this case, the samples can keep arriving for an indefinite time. If they are drawn from the same data-generating distribution, the algorithm will keep learning (probably without large parameter modifications), but if they are drawn from a completely different distribution, the weights will slowly *forget* the previous one and learn the new distribution. For simplicity, we also assume we’re working with a binary classification with bipolar (±1) labels.
Given a weight vector w, the prediction is simply obtained as:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_2.png)
All these algorithms are based on the Hinge loss function (the same used by SVM):
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_11-e1507384552517.png)
The value of L is bounded between 0 (meaning a perfect match) and K, which depends on f(x(t),θ), with K>0 corresponding to a completely wrong prediction. A Passive-Aggressive algorithm works, generically, with this update rule:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_4.png)
To understand this rule, let’s assume the slack variable ξ=0 (and L constrained to be 0). If a sample x(t) is presented, the classifier uses the current weight vector to determine the sign. If the sign is correct, the loss function is 0 and the argmin is w(t). This means that the algorithm is **passive** when a correct classification occurs. Let’s now assume that a misclassification occurred:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_5-768x483.png)
The angle θ > 90°; therefore, the dot product is negative and the sample is classified as -1 even though its label is +1. In this case, the update rule becomes very **aggressive**, because it looks for a new w which must be as close as possible to the previous one (otherwise the existing knowledge is immediately lost), while satisfying L=0 (in other words, the classification must be correct).
The introduction of the slack variable allows for soft margins (as in SVMs) and a degree of tolerance controlled by the parameter C. In particular, the loss is now only required to satisfy L ≤ ξ, allowing a larger error. Higher C values yield stronger aggressiveness (with a consequently higher risk of destabilization in the presence of noise), while lower values allow a better adaptation. In fact, this kind of algorithm, when working online, must cope with the presence of noisy samples (with wrong labels). Good robustness is necessary; otherwise, overly rapid changes would produce higher misclassification rates.
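Written out explicitly, the constrained problem behind this rule, in the squared-slack variant that matches the 1/(2C) factor used in the code below (PA-II in the original paper), is:

```latex
w_{t+1} = \operatorname*{arg\,min}_{w}\;
          \frac{1}{2}\lVert w - w_t \rVert^{2} + C\,\xi^{2}
\quad \text{subject to} \quad
L\bigl(w; (x_t, y_t)\bigr) \le \xi
```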
After solving both update conditions, we get the closed-form update rule:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_6.png)
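In the notation used so far, and consistent with the snippet below, this closed-form rule can be written as:

```latex
w_{t+1} = w_t + \tau_t\, y_t\, x_t,
\qquad
\tau_t = \frac{L\bigl(w_t; (x_t, y_t)\bigr)}{\lVert x_t \rVert^{2} + \frac{1}{2C}}
```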
This rule confirms our expectations: the weight vector is updated with a factor whose sign is determined by y(t) and whose magnitude is proportional to the error. Note that if there’s no misclassification the numerator becomes 0, so w(t+1) = w(t), while, in case of misclassification, w rotates towards x(t) and stops with a loss L ≤ ξ. In the next figure, the effect has been exaggerated to show the rotation; normally, it is as small as possible:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_7-768x487.png)
After the rotation, θ < 90° and the dot product becomes positive, so the sample is correctly classified as +1. Scikit-Learn implements Passive Aggressive algorithms, but I preferred to implement the code myself, just to show how simple they are. In the next snippet (also available in this [GIST](https://gist.github.com/giuseppebonaccorso/d700d7bd48b1865990d2f226759686b1)), I first create a dataset, then compute the score with a Logistic Regression, and finally apply the PA algorithm and measure the final score on a test set:
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# Set random seed (for reproducibility)
np.random.seed(1000)
nb_samples = 5000
nb_features = 4
# Create the dataset
X, Y = make_classification(n_samples=nb_samples,
                           n_features=nb_features,
                           n_informative=nb_features - 2,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=2,
                           n_clusters_per_class=2)
# Split the dataset
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.35, random_state=1000)
# Perform a logistic regression
lr = LogisticRegression()
lr.fit(X_train, Y_train)
print('Logistic Regression score: {}'.format(lr.score(X_test, Y_test)))
# Set the y=0 labels to -1
Y_train[Y_train==0] = -1
Y_test[Y_test==0] = -1
C = 0.01
w = np.zeros((nb_features, 1))
# Implement a Passive Aggressive Classification
for i in range(X_train.shape[0]):
    xi = X_train[i].reshape((nb_features, 1))
    loss = max(0, 1 - (Y_train[i] * np.dot(w.T, xi)))
    tau = loss / (np.power(np.linalg.norm(xi, ord=2), 2) + (1 / (2*C)))
    coeff = tau * Y_train[i]
    w += coeff * xi
# Compute accuracy
Y_pred = np.sign(np.dot(w.T, X_test.T))
c = np.count_nonzero(Y_pred - Y_test)
print('PA accuracy: {}'.format(1 - float(c) / X_test.shape[0]))
```
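For comparison, an equivalent run with Scikit-Learn’s built-in classifier might look like the following sketch (the hyperparameters are illustrative, not tuned; a `partial_fit` method is also available for a truly online setting):

```python
# Scikit-Learn's own Passive Aggressive classifier on the same split
# (illustrative hyperparameters; reuses X_train/X_test from the snippet above)
from sklearn.linear_model import PassiveAggressiveClassifier

pac = PassiveAggressiveClassifier(C=0.01, max_iter=1000, random_state=1000)
pac.fit(X_train, Y_train)

print('Scikit-Learn PA accuracy: {}'.format(pac.score(X_test, Y_test)))
```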
### Regression
For regression, the algorithm is very similar, but it’s now based on a slightly different Hinge loss function (called ε-insensitive):
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_8.png)
The parameter ε determines a tolerance for prediction errors. The update conditions are the same as those adopted for classification problems, and the resulting update rule is:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_9-768x141.png)
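In symbols, and consistent with the regression snippet below, the ε-insensitive loss and the resulting update can be written as:

```latex
L_{\varepsilon}\bigl(w; (x_t, y_t)\bigr) =
\max\bigl(0,\; \lvert w \cdot x_t - y_t \rvert - \varepsilon \bigr)

w_{t+1} = w_t + \tau_t\, \operatorname{sign}\bigl(y_t - w_t \cdot x_t\bigr)\, x_t,
\qquad
\tau_t = \frac{L_{\varepsilon}}{\lVert x_t \rVert^{2} + \frac{1}{2C}}
```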
Just like for classification, Scikit-Learn also implements a Passive Aggressive regressor; however, the next snippet (also available in this [GIST](https://gist.github.com/giuseppebonaccorso/d459e15308b4faeb3a63bbbf8a6c9462)) contains a custom implementation:
```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_regression
# Set random seed (for reproducibility)
np.random.seed(1000)
nb_samples = 500
nb_features = 4
# Create the dataset
X, Y = make_regression(n_samples=nb_samples,
                       n_features=nb_features)
# Implement a Passive Aggressive Regression
C = 0.01
eps = 0.1
w = np.zeros((X.shape[1], 1))
errors = []
for i in range(X.shape[0]):
    xi = X[i].reshape((X.shape[1], 1))
    yi = np.dot(w.T, xi)
    loss = max(0, np.abs(yi - Y[i]) - eps)
    tau = loss / (np.power(np.linalg.norm(xi, ord=2), 2) + (1 / (2*C)))
    coeff = tau * np.sign(Y[i] - yi)
    errors.append(np.abs(Y[i] - yi)[0, 0])
    w += coeff * xi
# Show the error plot
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(errors)
ax.set_xlabel('Time')
ax.set_ylabel('Error')
ax.set_title('Passive Aggressive Regression Absolute Error')
ax.grid()
plt.show()
```
The error plot is shown in the following figure:
![img](https://www.bonaccorso.eu/wp-content/uploads/2017/10/mla_paa_10-768x398.png)
The quality of the regression (in particular, the length of the transient period during which the error is high) can be controlled by picking better C and ε values. In particular, I suggest trying C values across different orders of magnitude (100, 10, 1, 0.1, 0.01) in order to determine whether higher aggressiveness is preferable.
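As with classification, a quick comparison against Scikit-Learn’s built-in regressor is possible; this sketch uses illustrative, untuned hyperparameters and simply reports the in-sample R² score:

```python
# Scikit-Learn's own Passive Aggressive regressor on the same data
# (illustrative hyperparameters; reuses X, Y from the snippet above)
from sklearn.linear_model import PassiveAggressiveRegressor

par = PassiveAggressiveRegressor(C=0.01, epsilon=0.1, max_iter=1000, random_state=1000)
par.fit(X, Y)

print('Scikit-Learn PA regressor R^2: {}'.format(par.score(X, Y)))
```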
References:
- Crammer K., Dekel O., Keshet J., Shalev-Shwartz S., Singer Y., [Online Passive-Aggressive Algorithms](http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf), Journal of Machine Learning Research 7 (2006) 551–585
See also:
https://www.bonaccorso.eu/2017/08/29/ml-algorithms-addendum-instance-based-learning/
# ML Algorithms addendum: Passive Aggressive Algorithms
Original article: [ML Algorithms addendum: Passive Aggressive Algorithms](https://www.bonaccorso.eu/2017/10/06/ml-algorithms-addendum-passive-aggressive-algorithms/?from=hackcv&hmsr=hackcv.com&utm_medium=hackcv.com&utm_source=hackcv.com)
Passive Aggressive algorithms are a family of online learning algorithms (for both classification and regression) proposed by Crammer et al. The idea is very simple and their performance has been proven to be superior to many other alternative methods, such as the [Online Perceptron](https://en.wikipedia.org/wiki/Perceptron) and [MIRA](https://en.wikipedia.org/wiki/Margin-infused_relaxed_algorithm) (see the original paper in the reference section).
## Classification
……
The value of L is bounded between 0 (meaning a perfect match) and K, which depends on f(x(t),θ), with K>0 corresponding to a completely wrong prediction.
After the rotation, θ < 90° and the dot product becomes positive, so the sample is correctly classified as +1. Scikit-Learn implements Passive Aggressive algorithms, but I preferred to implement the code myself, just to show how simple they are. In the next snippet (also available in this [GIST](https://gist.github.com/giuseppebonaccorso/d700d7bd48b1865990d2f226759686b1)), I first create a dataset, then compute the score with a Logistic Regression, and finally apply the PA algorithm and measure the final score on a test set:
```python
import numpy as np

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Set random seed (for reproducibility)
np.random.seed(1000)

nb_samples = 5000
nb_features = 4

# Create the dataset
X, Y = make_classification(n_samples=nb_samples,
                           n_features=nb_features,
                           n_informative=nb_features - 2,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=2,
                           n_clusters_per_class=2)

# Split the dataset
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.35, random_state=1000)

# Perform a logistic regression
lr = LogisticRegression()
lr.fit(X_train, Y_train)
print('Logistic Regression score: {}'.format(lr.score(X_test, Y_test)))

# Set the y=0 labels to -1
Y_train[Y_train==0] = -1
Y_test[Y_test==0] = -1

C = 0.01
w = np.zeros((nb_features, 1))

# Implement a Passive Aggressive Classification
for i in range(X_train.shape[0]):
    xi = X_train[i].reshape((nb_features, 1))
    loss = max(0, 1 - (Y_train[i] * np.dot(w.T, xi)))
    tau = loss / (np.power(np.linalg.norm(xi, ord=2), 2) + (1 / (2*C)))
    coeff = tau * Y_train[i]
    w += coeff * xi

# Compute accuracy
Y_pred = np.sign(np.dot(w.T, X_test.T))
c = np.count_nonzero(Y_pred - Y_test)

print('PA accuracy: {}'.format(1 - float(c) / X_test.shape[0]))
```
……
Just like for classification, Scikit-Learn also implements a Passive Aggressive regressor; however, the next snippet (also available in this [GIST](https://gist.github.com/giuseppebonaccorso/d459e15308b4faeb3a63bbbf8a6c9462)) contains a custom implementation:
```python
import matplotlib.pyplot as plt
import numpy as np

from sklearn.datasets import make_regression

# Set random seed (for reproducibility)
np.random.seed(1000)

nb_samples = 500
nb_features = 4

# Create the dataset
X, Y = make_regression(n_samples=nb_samples,
                       n_features=nb_features)

# Implement a Passive Aggressive Regression
C = 0.01
eps = 0.1
w = np.zeros((X.shape[1], 1))
errors = []

for i in range(X.shape[0]):
    xi = X[i].reshape((X.shape[1], 1))
    yi = np.dot(w.T, xi)
    loss = max(0, np.abs(yi - Y[i]) - eps)
    tau = loss / (np.power(np.linalg.norm(xi, ord=2), 2) + (1 / (2*C)))
    coeff = tau * np.sign(Y[i] - yi)
    errors.append(np.abs(Y[i] - yi)[0, 0])
    w += coeff * xi

# Show the error plot
fig, ax = plt.subplots(figsize=(16, 8))

ax.plot(errors)
ax.set_xlabel('Time')
ax.set_ylabel('Error')
ax.set_title('Passive Aggressive Regression Absolute Error')
ax.grid()

plt.show()
```
The error plot is shown in the following figure:
......
| Title | Summary |
| ------------------------------------------------------------ | -------------------------------------------------------- |
| [An Introduction to different Types of Convolutions in Deep Learning](https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d?from=hackcv&hmsr=hackcv.com) | An introduction to the different types of convolutions in deep learning |
| [ML Algorithms addendum: Passive Aggressive Algorithms](https://www.bonaccorso.eu/2017/10/06/ml-algorithms-addendum-passive-aggressive-algorithms/?from=hackcv&hmsr=hackcv.com&utm_medium=hackcv.com&utm_source=hackcv.com) | Introduces a family of machine learning algorithms called Passive Aggressive, usable for both classification and regression |
| [I Need to Know About H.264](https://blog.piasy.com/2017/09/22/I-Need-Know-About-H264/?from=hackcv&hmsr=hackcv.com&utm_medium=hackcv.com&utm_source=hackcv.com) | |
# An Introduction to different Types of Convolutions in Deep Learning
Original article: [An Introduction to different Types of Convolutions in Deep Learning](https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d?from=hackcv&hmsr=hackcv.com)
Let me give you a quick overview of the different types of convolutions and their benefits. For the sake of simplicity, I will focus on 2D convolutions only.
## Convolutions
……
## Translation Contributors
| Date | Translator | Proofreader |
| --------------------------------------------------- | ---------------------------------------------- | ---- |
| [2017/09/25 Issue 1](https://hackcv.com/daily/p/1/) | [@wnma](https://github.com/wnma3mz) | |
| [2017/10/04 Issue 2](https://hackcv.com/daily/p/2/) | [@doordiey](https://github.com/doordiey) | |
| [2017/10/05 Issue 3](https://hackcv.com/daily/p/3/) | [@Arron206](https://github.com/Arron206) | |
| [2017/10/06 Issue 4](https://hackcv.com/daily/p/4/) | [@mllove](https://github.com/mllove) | |
| [2017/10/07 Issue 5](https://hackcv.com/daily/p/5/) | [@wnma](https://github.com/wnma3mz) | |
| [2017/10/08 Issue 6](https://hackcv.com/daily/p/6/) | [@doordiey](https://github.com/doordiey) | |
| [2017/10/09 Issue 7](https://hackcv.com/daily/p/7/) | [@mllove](https://github.com/mllove) | |
| [2017/10/10 Issue 8](https://hackcv.com/daily/p/8/) | [@AlexdanerZe](https://github.com/AlexdanerZe) | |
| [2017/10/11 Issue 9](https://hackcv.com/daily/p/9/) | | |
| [2017/10/11 Issue 10](https://hackcv.com/daily/p/10/) | | |
| [2017/10/13 Issue 11](https://hackcv.com/daily/p/11/) | [@wnma](https://github.com/wnma3mz) | |
| [2017/10/14 Issue 12](https://hackcv.com/daily/p/12/) | [@wnma](https://github.com/wnma3mz) | |
| [2017/10/15 Issue 13](https://hackcv.com/daily/p/13/) | | |
| [2017/10/16 Issue 14](https://hackcv.com/daily/p/14/) | | |
| [2017/10/17 Issue 15](https://hackcv.com/daily/p/15/) | | |
| [2017/10/18 Issue 16](https://hackcv.com/daily/p/16/) | | |
| [2017/10/19 Issue 17](https://hackcv.com/daily/p/17/) | | |
| [2017/10/20 Issue 18](https://hackcv.com/daily/p/18/) | | |
## Contribution Guide
......