pytorch-quickstart

DC Wang · 2022-03-25

# QUICKSTART

This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper.

# Working with data

PyTorch has two primitives to work with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
```
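To make the Dataset contract concrete, here is a minimal sketch (an illustrative addition, not from the original post; the class name and the in-memory storage are assumptions): a Dataset only needs __len__ to report the sample count and __getitem__ to return one (sample, label) pair.

```python
from torch.utils.data import Dataset

# Minimal illustrative Dataset: holds samples and labels in memory.
class InMemoryDataset(Dataset):
    def __init__(self, samples, labels):
        self.samples, self.labels = samples, labels

    def __len__(self):
        # Number of samples in the dataset.
        return len(self.samples)

    def __getitem__(self, idx):
        # Return one (sample, label) pair.
        return self.samples[idx], self.labels[idx]
```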

PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. For this tutorial, we will be using a TorchVision dataset.

The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO (full list here). In this tutorial, we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments, transform and target_transform, which modify the samples and labels respectively.

```python
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
```

Out:

```
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw

Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
```
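The download code above uses only transform. As a short sketch (an illustrative addition, not part of the original post), target_transform can reshape labels in the same way, for example turning each integer class index into a one-hot vector:

```python
from torchvision.transforms import Lambda

# Illustrative target_transform: one-hot encode the integer label y
# into a length-10 float vector (FashionMNIST has 10 classes).
one_hot = Lambda(
    lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)
)

training_data_onehot = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
    target_transform=one_hot,
)
```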

We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.

```python
batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
```

Out:

```
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
```
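The loaders above use DataLoader's defaults. As a small sketch (the shuffle and num_workers values below are assumptions, not from the original post), the shuffling and multiprocess loading mentioned earlier are enabled through constructor arguments:

```python
# Sketch: reshuffle training batches each epoch and load with worker processes.
train_dataloader = DataLoader(
    training_data,
    batch_size=batch_size,
    shuffle=True,     # reshuffle samples at the start of every epoch
    num_workers=2,    # assumed worker count; tune for your machine
)
```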

Read more about loading data in PyTorch.


# Creating Models

To define a neural network in PyTorch, we create a class that inherits from nn.Module. We define the layers of the network in the __init__ function and specify how data will pass through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU if available.

```python
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
```

Out:

```
Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
```
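Before training, it can help to sanity-check the forward pass. The snippet below is an illustrative addition (not in the original post), mirroring the companion "Build the Neural Network" tutorial: it feeds one random image-shaped tensor through the model and converts the raw logits to class probabilities.

```python
# Illustrative sanity check: one random 28x28 "image" through the untrained model.
X = torch.rand(1, 28, 28, device=device)
logits = model(X)                        # raw scores, shape [1, 10]
pred_probab = nn.Softmax(dim=1)(logits)  # normalize scores to probabilities
y_pred = pred_probab.argmax(1)           # index of the most likely class
print(f"Predicted class: {y_pred}")
```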

Read more about building neural networks in PyTorch.


# Optimizing the Model Parameters

To train a model, we need a loss function and an optimizer.

```python
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```
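As a brief illustration (the dummy tensors below are made up for this example), nn.CrossEntropyLoss expects raw, unnormalized logits plus integer class indices, and applies the log-softmax internally:

```python
# Illustration with dummy data: a batch of 3 predictions over 10 classes.
dummy_logits = torch.randn(3, 10)           # raw logits, no softmax applied
dummy_labels = torch.tensor([0, 4, 9])      # integer class indices, not one-hot
print(loss_fn(dummy_logits, dummy_labels))  # scalar loss tensor
```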

In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model’s parameters.

```python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
```

We also check the model’s performance against the test dataset to ensure it is learning.

```python
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
```

The training process is conducted over several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model’s accuracy and loss at each epoch; we’d like to see the accuracy increase and the loss decrease with every epoch.

```python
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")
```

Out:

```
Epoch 1
-------------------------------
loss: 2.302418  [    0/60000]
loss: 2.292112  [ 6400/60000]
loss: 2.263742  [12800/60000]
loss: 2.261939  [19200/60000]
loss: 2.246309  [25600/60000]
loss: 2.211567  [32000/60000]
loss: 2.222588  [38400/60000]
loss: 2.184552  [44800/60000]
loss: 2.181921  [51200/60000]
loss: 2.149043  [57600/60000]
Test Error:
 Accuracy: 39.3%, Avg loss: 2.141805

Epoch 2
-------------------------------
loss: 2.158010  [    0/60000]
loss: 2.149895  [ 6400/60000]
loss: 2.079702  [12800/60000]
loss: 2.100497  [19200/60000]
loss: 2.038944  [25600/60000]
loss: 1.982337  [32000/60000]
loss: 2.017102  [38400/60000]
loss: 1.928107  [44800/60000]
loss: 1.938422  [51200/60000]
loss: 1.860414  [57600/60000]
Test Error:
 Accuracy: 52.8%, Avg loss: 1.856996

Epoch 3
-------------------------------
loss: 1.900860  [    0/60000]
loss: 1.867335  [ 6400/60000]
loss: 1.738796  [12800/60000]
loss: 1.787248  [19200/60000]
loss: 1.663797  [25600/60000]
loss: 1.628784  [32000/60000]
loss: 1.656449  [38400/60000]
loss: 1.553097  [44800/60000]
loss: 1.582812  [51200/60000]
loss: 1.476982  [57600/60000]
Test Error:
 Accuracy: 60.3%, Avg loss: 1.495178

Epoch 4
-------------------------------
loss: 1.570129  [    0/60000]
loss: 1.536125  [ 6400/60000]
loss: 1.379916  [12800/60000]
loss: 1.455038  [19200/60000]
loss: 1.332352  [25600/60000]
loss: 1.337494  [32000/60000]
loss: 1.350402  [38400/60000]
loss: 1.275247  [44800/60000]
loss: 1.310407  [51200/60000]
loss: 1.212858  [57600/60000]
Test Error:
 Accuracy: 63.3%, Avg loss: 1.241025

Epoch 5
-------------------------------
loss: 1.320390  [    0/60000]
loss: 1.305637  [ 6400/60000]
loss: 1.133990  [12800/60000]
loss: 1.242127  [19200/60000]
loss: 1.116368  [25600/60000]
loss: 1.145456  [32000/60000]
loss: 1.163464  [38400/60000]
loss: 1.101201  [44800/60000]
loss: 1.141440  [51200/60000]
loss: 1.058561  [57600/60000]
Test Error:
 Accuracy: 64.7%, Avg loss: 1.082234

Done!
```

Read more about Training your model.


# Saving Models

A common way to save a model is to serialize the internal state dictionary (containing the model parameters).

```python
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
```

Out:

```
Saved PyTorch Model State to model.pth
```
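As an aside (this variant is drawn from PyTorch's general save/load documentation, not from the original post, and the file name is hypothetical), you can also pickle the entire module object; this is more fragile, since loading then depends on the NeuralNetwork class definition being importable:

```python
# Alternative sketch: serialize the whole model object (pickle-based).
torch.save(model, "model_full.pth")        # hypothetical file name
model_full = torch.load("model_full.pth")  # requires NeuralNetwork to be importable
```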

# Loading Models

The process for loading a model includes re-creating the model structure and loading the state dictionary into it.

```python
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
```
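One practical note (an addition, not in the original post): if the checkpoint was written on a GPU machine and is being loaded where CUDA is unavailable, pass map_location to torch.load so the tensors are remapped onto the CPU:

```python
# Sketch: remap GPU-saved tensors onto the CPU at load time.
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth", map_location=torch.device("cpu")))
```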

This model can now be used to make predictions.

```python
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]

model.eval()
x, y = test_data[0][0], test_data[0][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
```

Out:

```
Predicted: "Ankle boot", Actual: "Ankle boot"
```

Read more about Saving & Loading your model.

Tags: Python · AI · Machine Learning
Last updated: 2022-10-03