# PyTorch: nn

> Original: <https://pytorch.org/tutorials/beginner/examples_nn/polynomial_nn.html#sphx-glr-beginner-examples-nn-polynomial-nn-py>
>
> Proofread by: DrDavidS

A third-order polynomial, trained to predict `y = sin(x)` on the interval from `-pi` to `pi` by minimizing squared Euclidean distance.
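
In symbols (with \(w_1, w_2, w_3\) and \(b\) denoting the weight and bias of the linear layer used below), the model and the summed-squared-error loss being minimized are:

$$\hat{y}(x) = b + w_1 x + w_2 x^2 + w_3 x^3, \qquad L = \sum_{i=1}^{2000} \bigl(\hat{y}(x_i) - \sin x_i\bigr)^2 .$$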

This implementation uses PyTorch's `nn` package to build the network.
PyTorch autograd makes it easy to define computational graphs and compute gradients, but raw autograd can be too low-level for defining complex neural networks.
This is where the `nn` package helps.
The `nn` package defines a set of Modules, each of which you can think of as a neural network layer: it takes an input, produces an output, and may hold some trainable weights.
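
As a minimal sketch of this idea (not part of the tutorial code below; `Polynomial3` is a hypothetical name), the same three-feature linear model could also be written as a custom `nn.Module` subclass:

```python
import torch

class Polynomial3(torch.nn.Module):
    """Hypothetical custom Module equivalent to the Sequential model used below."""

    def __init__(self):
        super().__init__()
        # One linear layer mapping the (x, x^2, x^3) features to a scalar.
        self.linear = torch.nn.Linear(3, 1)

    def forward(self, xx):
        # xx has shape (N, 3); drop the trailing dim to match y's shape (N,).
        return self.linear(xx).squeeze(-1)
```

Like the `nn.Sequential` model in the tutorial code, an instance of this class can be called like a function: `Polynomial3()(xx)`.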

```python
import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)
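assert xx.shape == (2000, 3)  # quick sanity check: one (x, x^2, x^3) row per sample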

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')

```
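
After training, the fitted polynomial can be evaluated at new inputs. A minimal sketch, assuming the script above has just run (so `model`, `p`, `torch`, and `math` are in scope):

```python
# Build the (x, x^2, x^3) features for a new point, exactly as during training.
x_new = torch.tensor([0.5])
xx_new = x_new.unsqueeze(-1).pow(p)

# no_grad() because we only want a prediction, not gradients.
with torch.no_grad():
    print(model(xx_new).item(), math.sin(0.5))  # prediction vs. ground truth
```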

[Download Python source code: `polynomial_nn.py`](https://pytorch.org/tutorials/_downloads/b4767df4367deade63dc8a0d3712c1d4/polynomial_nn.py)

[Download Jupyter notebook: `polynomial_nn.ipynb`](https://pytorch.org/tutorials/_downloads/7bc167d8b8308ae65a717d7461d838fa/polynomial_nn.ipynb)