
PyTorch: nn

Created On: Dec 03, 2020 | Last Updated: Jun 14, 2022 | Last Verified: Nov 05, 2024

A third order polynomial, trained to predict \(y=\sin(x)\) from \(-\pi\) to \(\pi\) by minimizing squared Euclidean distance.
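Concretely, the model fitted below has the form \(\hat{y} = a + b x + c x^2 + d x^3\), and the quantity minimized is the summed squared error \(\sum_i (\hat{y}_i - y_i)^2\) over the sample points, which is what MSELoss with reduction='sum' computes and what the result printed at the end of the script reports.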

This implementation uses the nn package from PyTorch to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be a bit too low-level for defining complex neural networks; this is where the nn package can help. The nn package defines a set of Modules, which you can think of as neural network layers: each produces output from input and may hold some trainable weights.
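
A minimal standalone sketch of the Module idea, using a single nn.Linear layer on its own (illustration only, separate from the example script below):

import torch

# A Linear Module computes output = input @ weight.T + bias; weight and bias
# are registered as trainable Parameters (requires_grad=True by default).
layer = torch.nn.Linear(3, 1)
out = layer(torch.randn(5, 3))   # input of shape (5, 3) -> output of shape (5, 1)
print(out.shape)                                # torch.Size([5, 1])
print([p.shape for p in layer.parameters()])    # [torch.Size([1, 3]), torch.Size([1])]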

Out:

99 240.86276245117188
199 167.7965850830078
299 117.86255645751953
399 83.69757843017578
499 60.295021057128906
599 44.246177673339844
699 33.22774887084961
799 25.654367446899414
899 20.443069458007812
999 16.853185653686523
1099 14.377497673034668
1199 12.668401718139648
1299 11.48726749420166
1399 10.670132637023926
1499 10.104259490966797
1599 9.712006568908691
1699 9.439836502075195
1799 9.250818252563477
1899 9.119422912597656
1999 9.028005599975586
Result: y = 0.013691592030227184 + 0.8503277897834778 x + -0.0023620266001671553 x^2 + -0.09241818636655807 x^3

import torch
import math


# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)
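# Optional sanity check: the broadcast yields one row per sample point and one
# column for each of the powers x, x^2, x^3.
assert xx.shape == (2000, 3)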

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):

    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]

# For the linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
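
The manual weight update in the loop above can also be written with the torch.optim package; the snippet below is only a sketch of that alternative and is not part of this example's script:

# Sketch: the same training step using torch.optim.SGD in place of the manual
# update under torch.no_grad(). Assumes model, loss_fn, xx and y are defined
# exactly as above.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()   # clear gradients accumulated on the parameters
    loss.backward()         # compute gradients of the loss w.r.t. the parameters
    optimizer.step()        # apply one gradient descent step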

Total running time of the script: ( 0 minutes 0.430 seconds)

Gallery generated by Sphinx-Gallery
