
Introduction || Tensors || Autograd || Building Models || TensorBoard Support || Training Models || Model Understanding

Training with PyTorch

Created On: Nov 30, 2021 | Last Updated: May 31, 2023 | Last Verified: Nov 05, 2024

Follow along with the video below or on youtube.

Introduction

In past videos, we've discussed and demonstrated:

  • Building models with the neural network layers and functions of the torch.nn module

  • The mechanics of automated gradient computation, which is central to gradient-based model training

  • Using TensorBoard to visualize training progress and other activities

In this video, we'll be adding some new tools to your inventory:

  • We'll get familiar with the Dataset and DataLoader abstractions, and how they ease the process of feeding data to your model during a training loop

  • We'll discuss specific loss functions and when to use them

  • We'll look at PyTorch optimizers, which implement algorithms to adjust model weights based on the outcome of a loss function

Finally, we'll pull all of these together and see a full PyTorch training loop in action.

Dataset 和 DataLoader

The Dataset and DataLoader classes encapsulate the process of pulling your data from storage and exposing it to your training loop in batches.

The Dataset is responsible for accessing and processing single instances of data.

The DataLoader pulls instances of data from the Dataset (either automatically or with a sampler that you define), collects them in batches, and returns them for consumption by your training loop. The DataLoader works with all kinds of datasets, regardless of the type of data they contain.
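
Before moving on, it may help to see what the Dataset abstraction looks like from the inside. The sketch below is not part of this tutorial's code: it is a hypothetical, minimal custom Dataset (with made-up names and sizes) showing that only __len__ and __getitem__ are needed for a DataLoader to batch and shuffle it.

import torch
from torch.utils.data import Dataset, DataLoader

class RandomVectorDataset(Dataset):
    """A toy Dataset holding random feature vectors and binary labels."""
    def __init__(self, num_samples=100, num_features=8):
        self.features = torch.randn(num_samples, num_features)
        self.labels = torch.randint(0, 2, (num_samples,))

    def __len__(self):
        # The number of individual data instances
        return len(self.labels)

    def __getitem__(self, idx):
        # Access and return a single (input, label) pair
        return self.features[idx], self.labels[idx]

toy_loader = DataLoader(RandomVectorDataset(), batch_size=4, shuffle=True)
toy_inputs, toy_labels = next(iter(toy_loader))
print(toy_inputs.shape, toy_labels.shape)  # torch.Size([4, 8]) torch.Size([4])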

For this tutorial, we'll be using the Fashion-MNIST dataset provided by TorchVision. We use torchvision.transforms.Normalize() to zero-center and normalize the distribution of the image tile content, and download both training and validation data splits. (With a mean and standard deviation of 0.5, pixel values in [0, 1] are remapped to [-1, 1] via (x - 0.5) / 0.5.)

import torch
import torchvision
import torchvision.transforms as transforms

# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter
from datetime import datetime


transform = transforms.Compose(
    [transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))])

# Create datasets for training & validation, download if necessary
training_set = torchvision.datasets.FashionMNIST('./data', train=True, transform=transform, download=True)
validation_set = torchvision.datasets.FashionMNIST('./data', train=False, transform=transform, download=True)

# Create data loaders for our datasets; shuffle for training, not for validation
training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False)

# Class labels
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
        'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

# Report split sizes
print('Training set has {} instances'.format(len(training_set)))
print('Validation set has {} instances'.format(len(validation_set)))
  0%|          | 0.00/26.4M [00:00<?, ?B/s]
  0%|          | 65.5k/26.4M [00:00<01:12, 362kB/s]
  1%|          | 229k/26.4M [00:00<00:38, 681kB/s]
  4%|3         | 950k/26.4M [00:00<00:11, 2.18MB/s]
 15%|#4        | 3.83M/26.4M [00:00<00:02, 7.59MB/s]
 36%|###6      | 9.63M/26.4M [00:00<00:01, 16.4MB/s]
 57%|#####6    | 14.9M/26.4M [00:01<00:00, 20.8MB/s]
 77%|#######7  | 20.4M/26.4M [00:01<00:00, 23.8MB/s]
100%|#########9| 26.3M/26.4M [00:01<00:00, 26.6MB/s]
100%|##########| 26.4M/26.4M [00:01<00:00, 18.2MB/s]

  0%|          | 0.00/29.5k [00:00<?, ?B/s]
100%|##########| 29.5k/29.5k [00:00<00:00, 327kB/s]

  0%|          | 0.00/4.42M [00:00<?, ?B/s]
  1%|1         | 65.5k/4.42M [00:00<00:12, 361kB/s]
  5%|5         | 229k/4.42M [00:00<00:06, 678kB/s]
 21%|##        | 918k/4.42M [00:00<00:01, 2.56MB/s]
 44%|####3     | 1.93M/4.42M [00:00<00:00, 4.08MB/s]
100%|##########| 4.42M/4.42M [00:00<00:00, 6.06MB/s]

  0%|          | 0.00/5.15k [00:00<?, ?B/s]
100%|##########| 5.15k/5.15k [00:00<00:00, 58.2MB/s]
Training set has 60000 instances
Validation set has 10000 instances

As always, let's visualize the data as a sanity check:

import matplotlib.pyplot as plt
import numpy as np

# Helper function for inline image display
def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))

dataiter = iter(training_loader)
images, labels = next(dataiter)

# Create a grid from the images and show them
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)
print('  '.join(classes[labels[j]] for j in range(4)))
[Image: a grid of four Fashion-MNIST images from the first training batch]
Sandal  Sneaker  Coat  Sneaker

The Model

The model we'll use in this example is a variant of LeNet-5 - it should be familiar if you've watched the previous videos in this series.

import torch.nn as nn
import torch.nn.functional as F

# PyTorch models inherit from torch.nn.Module
class GarmentClassifier(nn.Module):
    def __init__(self):
        super(GarmentClassifier, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


model = GarmentClassifier()

Loss Function

For this example, we'll be using a cross-entropy loss. For demonstration purposes, we'll create batches of dummy output and label values, run them through the loss function, and examine the result.

loss_fn = torch.nn.CrossEntropyLoss()

# NB: Loss functions expect data in batches, so we're creating batches of 4
# Represents the model's confidence in each of the 10 classes for a given input
dummy_outputs = torch.rand(4, 10)
# Represents the correct class among the 10 being tested
dummy_labels = torch.tensor([1, 5, 3, 7])

print(dummy_outputs)
print(dummy_labels)

loss = loss_fn(dummy_outputs, dummy_labels)
print('Total loss for this batch: {}'.format(loss.item()))
tensor([[0.7026, 0.1489, 0.0065, 0.6841, 0.4166, 0.3980, 0.9849, 0.6701, 0.4601,
         0.8599],
        [0.7461, 0.3920, 0.9978, 0.0354, 0.9843, 0.0312, 0.5989, 0.2888, 0.8170,
         0.4150],
        [0.8408, 0.5368, 0.0059, 0.8931, 0.3942, 0.7349, 0.5500, 0.0074, 0.0554,
         0.1537],
        [0.7282, 0.8755, 0.3649, 0.4566, 0.8796, 0.2390, 0.9865, 0.7549, 0.9105,
         0.5427]])
tensor([1, 5, 3, 7])
Total loss for this batch: 2.428950071334839
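
As an aside that is not part of the original example: torch.nn.CrossEntropyLoss combines a log-softmax with a negative log-likelihood loss, so the batch loss printed above can also be reproduced by hand from the same dummy tensors:

import torch.nn.functional as F

# CrossEntropyLoss(outputs, labels) is equivalent to NLLLoss(log_softmax(outputs), labels)
log_probs = F.log_softmax(dummy_outputs, dim=1)
manual_loss = F.nll_loss(log_probs, dummy_labels)
print('Manual cross-entropy for this batch: {}'.format(manual_loss.item()))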

Optimizer

For this example, we'll be using simple stochastic gradient descent with momentum.

It can be instructive to try some variations on this optimization scheme:

  • The learning rate determines the size of the steps the optimizer takes. What does a different learning rate do to your training results, in terms of accuracy and convergence time?

  • Momentum nudges the optimizer in the direction of the strongest gradient over multiple steps. What does changing this value do to your results?

  • Try some different optimization algorithms, such as averaged SGD, Adagrad, or Adam. How do your results differ? (A few drop-in alternatives are sketched after the code below.)

# Optimizers specified in the torch.optim package
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
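
If you want to experiment with the variations suggested above, the lines below are a sketch of drop-in replacements for the SGD optimizer; the learning rates shown are arbitrary starting points, not tuned values.

# Any one of these can be swapped in for the SGD optimizer above
optimizer = torch.optim.ASGD(model.parameters(), lr=0.001)      # averaged SGD
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)    # Adagrad
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)      # Adam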

The Training Loop

Below, we have a function that performs one training epoch. It enumerates data from the DataLoader, and on each pass of the loop does the following:

  • Gets a batch of training data from the DataLoader

  • Zeros the optimizer's gradients

  • Performs an inference - that is, gets predictions from the model for an input batch

  • Calculates the loss for that set of predictions vs. the labels on the dataset

  • Calculates the backward gradients over the learning weights

  • Tells the optimizer to perform one learning step - that is, to adjust the model's learning weights based on the observed gradients for this batch, according to the optimization algorithm we chose

  • It reports on the loss for every 1000 batches.

  • Finally, it reports the average per-batch loss for the last 1000 batches, for comparison with a validation run

def train_one_epoch(epoch_index, tb_writer):
    running_loss = 0.
    last_loss = 0.

    # Here, we use enumerate(training_loader) instead of
    # iter(training_loader) so that we can track the batch
    # index and do some intra-epoch reporting
    for i, data in enumerate(training_loader):
        # Every data instance is an input + label pair
        inputs, labels = data

        # Zero your gradients for every batch!
        optimizer.zero_grad()

        # Make predictions for this batch
        outputs = model(inputs)

        # Compute the loss and its gradients
        loss = loss_fn(outputs, labels)
        loss.backward()

        # Adjust learning weights
        optimizer.step()

        # Gather data and report
        running_loss += loss.item()
        if i % 1000 == 999:
            last_loss = running_loss / 1000 # loss per batch
            print('  batch {} loss: {}'.format(i + 1, last_loss))
            tb_x = epoch_index * len(training_loader) + i + 1
            tb_writer.add_scalar('Loss/train', last_loss, tb_x)
            running_loss = 0.

    return last_loss

Per-Epoch Activity

There are a couple of things we'll want to do once per epoch:

  • Perform validation by checking our relative loss on a set of data that was not used for training, and report this

  • Save a copy of the model

Here, we'll do our reporting in TensorBoard. This will require going to the command line to start TensorBoard, and opening it in another browser tab.
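
Assuming the default runs/ log directory used by the SummaryWriter below, starting TensorBoard from the command line usually looks like this:

tensorboard --logdir=runs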

# Initializing in a separate cell so we can easily add more epochs to the same run
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
writer = SummaryWriter('runs/fashion_trainer_{}'.format(timestamp))
epoch_number = 0

EPOCHS = 5

best_vloss = 1_000_000.

for epoch in range(EPOCHS):
    print('EPOCH {}:'.format(epoch_number + 1))

    # Make sure gradient tracking is on, and do a pass over the data
    model.train(True)
    avg_loss = train_one_epoch(epoch_number, writer)


    running_vloss = 0.0
    # Set the model to evaluation mode, disabling dropout and using population
    # statistics for batch normalization.
    model.eval()

    # Disable gradient computation and reduce memory consumption.
    with torch.no_grad():
        for i, vdata in enumerate(validation_loader):
            vinputs, vlabels = vdata
            voutputs = model(vinputs)
            vloss = loss_fn(voutputs, vlabels)
            running_vloss += vloss

    avg_vloss = running_vloss / (i + 1)
    print('LOSS train {} valid {}'.format(avg_loss, avg_vloss))

    # Log the running loss averaged per batch
    # for both training and validation
    writer.add_scalars('Training vs. Validation Loss',
                    { 'Training' : avg_loss, 'Validation' : avg_vloss },
                    epoch_number + 1)
    writer.flush()

    # Track best performance, and save the model's state
    if avg_vloss < best_vloss:
        best_vloss = avg_vloss
        model_path = 'model_{}_{}'.format(timestamp, epoch_number)
        torch.save(model.state_dict(), model_path)

    epoch_number += 1
EPOCH 1:
  batch 1000 loss: 1.6334228541590274
  batch 2000 loss: 0.8324381597135216
  batch 3000 loss: 0.7350949151031673
  batch 4000 loss: 0.6221513676682953
  batch 5000 loss: 0.6008665340302978
  batch 6000 loss: 0.5533551393696107
  batch 7000 loss: 0.5268192595622968
  batch 8000 loss: 0.4953766325986944
  batch 9000 loss: 0.4763272075761342
  batch 10000 loss: 0.48026260716759134
  batch 11000 loss: 0.4555706014999887
  batch 12000 loss: 0.43150419856602096
  batch 13000 loss: 0.41889463035896185
  batch 14000 loss: 0.4101380754457787
  batch 15000 loss: 0.4188491042831447
LOSS train 0.4188491042831447 valid 0.42083388566970825
EPOCH 2:
  batch 1000 loss: 0.39033183104451746
  batch 2000 loss: 0.35730057470843896
  batch 3000 loss: 0.3797398313785088
  batch 4000 loss: 0.3595128281345387
  batch 5000 loss: 0.3674602470536483
  batch 6000 loss: 0.3695404906652402
  batch 7000 loss: 0.38634192156628705
  batch 8000 loss: 0.37888678515458013
  batch 9000 loss: 0.32936658181797246
  batch 10000 loss: 0.3460305611458316
  batch 11000 loss: 0.355949883276422
  batch 12000 loss: 0.34613123371596155
  batch 13000 loss: 0.3435088261961791
  batch 14000 loss: 0.35190882972519466
  batch 15000 loss: 0.34078337761512373
LOSS train 0.34078337761512373 valid 0.3449384272098541
EPOCH 3:
  batch 1000 loss: 0.3336456001721235
  batch 2000 loss: 0.2948776570415939
  batch 3000 loss: 0.30873254264354183
  batch 4000 loss: 0.3269525112561532
  batch 5000 loss: 0.3081500146031831
  batch 6000 loss: 0.33906219027831686
  batch 7000 loss: 0.3114977335120493
  batch 8000 loss: 0.3028961390093173
  batch 9000 loss: 0.31883212575598735
  batch 10000 loss: 0.3121348040100274
  batch 11000 loss: 0.3204089922408457
  batch 12000 loss: 0.3172754702415841
  batch 13000 loss: 0.3022056705406212
  batch 14000 loss: 0.29925711060611504
  batch 15000 loss: 0.3158802612772852
LOSS train 0.3158802612772852 valid 0.32655972242355347
EPOCH 4:
  batch 1000 loss: 0.2793223039015138
  batch 2000 loss: 0.2759745200898469
  batch 3000 loss: 0.2885438525550344
  batch 4000 loss: 0.29715126178535867
  batch 5000 loss: 0.3092308461628054
  batch 6000 loss: 0.29819886386692085
  batch 7000 loss: 0.28212033420058286
  batch 8000 loss: 0.2652145917697999
  batch 9000 loss: 0.30505836525483027
  batch 10000 loss: 0.28172129570529797
  batch 11000 loss: 0.2760911153540328
  batch 12000 loss: 0.29349113235381813
  batch 13000 loss: 0.28226990548134745
  batch 14000 loss: 0.2974613601177407
  batch 15000 loss: 0.3016561955644138
LOSS train 0.3016561955644138 valid 0.3930961787700653
EPOCH 5:
  batch 1000 loss: 0.2611404411364929
  batch 2000 loss: 0.25894880425418887
  batch 3000 loss: 0.2585991551137176
  batch 4000 loss: 0.2808971864393097
  batch 5000 loss: 0.26857244527151486
  batch 6000 loss: 0.2778763904040534
  batch 7000 loss: 0.2556428771363862
  batch 8000 loss: 0.2892738865161955
  batch 9000 loss: 0.2898595165217885
  batch 10000 loss: 0.24955335284502145
  batch 11000 loss: 0.27326060194405
  batch 12000 loss: 0.2833696024138153
  batch 13000 loss: 0.2705353221144751
  batch 14000 loss: 0.24937306600230658
  batch 15000 loss: 0.27901125454565046
LOSS train 0.27901125454565046 valid 0.3100835084915161

To load a saved version of the model:

saved_model = GarmentClassifier()
# PATH should point to a saved state_dict, e.g. the model_path written by the training loop above
saved_model.load_state_dict(torch.load(PATH))

Once you've loaded the model, it's ready for whatever you need it for - more training, inference, or analysis.

Do note that if your model has constructor parameters that affect the model's structure, you'll need to provide them and configure the model identically to the state in which it was saved.
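
As a hedged illustration (this class and the file name are hypothetical, not part of the tutorial): a model whose layer shapes depend on a constructor argument must be rebuilt with the same argument before its saved state_dict can be loaded.

class ConfigurableClassifier(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        # hidden_size changes the shapes of the weights that get saved
        self.fc1 = nn.Linear(28 * 28, hidden_size)
        self.fc2 = nn.Linear(hidden_size, 10)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x.view(-1, 28 * 28))))

trained = ConfigurableClassifier(hidden_size=256)
torch.save(trained.state_dict(), 'configurable_model.pt')

# The restored instance must use the same hidden_size, or
# load_state_dict() will fail with a shape mismatch.
restored = ConfigurableClassifier(hidden_size=256)
restored.load_state_dict(torch.load('configurable_model.pt'))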

Other Resources

  • Docs on the data utilities, including Dataset and DataLoader, at pytorch.org

  • A note on the use of pinned memory for GPU training

  • Documentation on the datasets available in TorchVision, TorchText, and TorchAudio

  • Documentation on the loss functions available in PyTorch

  • Documentation on the torch.optim package, which includes optimizers and related tools, such as learning rate scheduling

  • A detailed tutorial on saving and loading models

  • The Tutorials section of pytorch.org contains tutorials on a broad variety of training tasks, including classification in different domains, generative adversarial networks, reinforcement learning, and more

Total running time of the script: (3 minutes 0.715 seconds)

Gallery generated by Sphinx-Gallery
