# PyTorch: Tensors and Autograd

> Original: <https://pytorch.org/tutorials/beginner/examples_autograd/polynomial_autograd.html#sphx-glr-beginner-examples-autograd-polynomial-autograd-py>
>
> Proofreader: DrDavidS

Here we prepare a third-order polynomial and train it by minimizing the squared Euclidean distance, so that it predicts the values of the function `y = sin(x)` on the interval from `-pi` to `pi`.
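
Written out (restating the setup above; the symbols correspond to the weights `a`, `b`, `c`, `d` in the code below), the model and the loss being minimized are:

$$
\hat{y}(x) = a + b x + c x^2 + d x^3, \qquad
L = \sum_{i=1}^{2000} \left(\hat{y}(x_i) - y_i\right)^2
$$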

This implementation uses PyTorch tensor operations to compute the forward pass, and uses PyTorch Autograd to compute the gradients.

A PyTorch tensor represents a node in a computational graph. If `x` is a tensor with `x.requires_grad=True`, then `x.grad` is another tensor that holds the gradient of some scalar value with respect to `x`.
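
As a minimal illustration of this mechanism (a sketch, not part of the original tutorial): for `y = x ** 2` we have `dy/dx = 2x`, so calling `y.backward()` at `x = 3` fills `x.grad` with `6`.

```py
import torch

# Minimal autograd sketch (illustrative only): d(x^2)/dx = 2x,
# so at x = 3 the gradient stored in x.grad should be 6.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()   # computes dy/dx and stores it in x.grad
print(x.grad)  # tensor(6.)
```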

```py
import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a zero-dimensional (scalar) Tensor.
    # loss.item() gets the scalar value held in the loss.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

```
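
For comparison, the same weight update can be delegated to `torch.optim` instead of the manual `torch.no_grad()` block above. A minimal sketch using plain SGD (hyperparameters copied from the loop above):

```py
import math
import torch

x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# One scalar weight per polynomial coefficient, as above.
params = [torch.randn((), requires_grad=True) for _ in range(4)]
a, b, c, d = params
optimizer = torch.optim.SGD(params, lr=1e-6)

for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()

    optimizer.zero_grad()  # replaces the manual a.grad = None, ...
    loss.backward()
    optimizer.step()       # replaces the manual in-place updates
```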

**Total running time of the script**: (0 minutes 0.000 seconds)

[Download Python source code: `polynomial_autograd.py`](https://pytorch.org/tutorials/_downloads/2956e289de4f5fdd59114171805b23d2/polynomial_autograd.py)

[Download Jupyter notebook: `polynomial_autograd.ipynb`](https://pytorch.org/tutorials/_downloads/e1d4d0ca7bd75ea2fff8032fcb79076e/polynomial_autograd.ipynb)