A model is a collection of connected layers that processes inputs to produce outputs. You can define a model with the nn package, which provides a collection of modules for common deep learning layers. A module (layer) in nn receives input tensors, computes output tensors, and holds learnable weights. In PyTorch we can define a model in two ways: with nn.Sequential, or by subclassing nn.Module.
Defining a Linear Layer
Let's create a linear layer and print its output size:
import torch
from torch import nn
# input tensor dimension 64*1000
input_tensor = torch.randn(64, 1000)
# linear layer with 1000 inputs and 100 outputs
linear_layer = nn.Linear(1000, 100)
# output of the linear layer
output = linear_layer(input_tensor)
print(output.size())
# torch.Size([64, 100])
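As noted above, a layer holds its own weights. For nn.Linear these are exposed as the weight and bias attributes; a quick check of their shapes:
print(linear_layer.weight.shape)
# torch.Size([100, 1000])
print(linear_layer.bias.shape)
# torch.Size([100])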
Defining a Model with nn.Sequential
We can build a deep learning model by stacking layers in order with nn.Sequential.
- Implement the model with nn.Sequential:
from torch import nn
# define a two-layer model
model = nn.Sequential(
    nn.Linear(4, 5),
    nn.ReLU(),
    nn.Linear(5, 1),
)
print(model)
# Sequential(
# (0): Linear(in_features=4, out_features=5, bias=True)
# (1): ReLU()
# (2): Linear(in_features=5, out_features=1, bias=True)
# )
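- If you prefer named (rather than numbered) submodules, nn.Sequential also accepts an OrderedDict; a minimal sketch of the same model with named layers:
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
    ("fc1", nn.Linear(4, 5)),
    ("relu", nn.ReLU()),
    ("fc2", nn.Linear(5, 1)),
]))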
Defining a Model with nn.Module
In PyTorch, a model can also be created by subclassing nn.Module. You first define the layers in the class's __init__ method, then apply the input to those layers in the forward method. This approach is more flexible for building custom models.
1. First, implement the skeleton of the class:
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

    def forward(self, x):
        pass
2. Define the __init__ function:
def __init__(self):
    super(Net, self).__init__()
    self.conv1 = nn.Conv2d(1, 20, 5, 1)
    self.conv2 = nn.Conv2d(20, 50, 5, 1)
    self.fc1 = nn.Linear(4*4*50, 500)
    self.fc2 = nn.Linear(500, 10)
3. Then, define the forward function (the 4*4*50 flatten size is verified in the sketch after this list):
def forward(self, x):
    x = F.relu(self.conv1(x))
    x = F.max_pool2d(x, 2, 2)
    x = F.relu(self.conv2(x))
    x = F.max_pool2d(x, 2, 2)
    x = x.view(-1, 4*4*50)
    x = F.relu(self.fc1(x))
    x = self.fc2(x)
    return F.log_softmax(x, dim=1)
4. Then, override the two class methods, __init__ and forward:
Net.__init__ = __init__
Net.forward = forward
5. Next, create a Net object and print the model:
model = Net()
print(model)
# Net(
# (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
# (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
# (fc1): Linear(in_features=800, out_features=500, bias=True)
# (fc2): Linear(in_features=500, out_features=10, bias=True)
# )
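The 4*4*50 flatten size in forward follows from the layer arithmetic, assuming MNIST-sized 1x28x28 inputs: conv1 (5x5 kernel, stride 1) maps 28x28 to 24x24, max pooling halves it to 12x12, conv2 maps it to 8x8, and pooling halves it to 4x4 with 50 channels, giving 4*4*50 = 800 features. A quick sanity check with a dummy input:
x = torch.randn(1, 1, 28, 28)
x = F.max_pool2d(F.relu(model.conv1(x)), 2, 2)  # -> (1, 20, 12, 12)
x = F.max_pool2d(F.relu(model.conv2(x)), 2, 2)  # -> (1, 50, 4, 4)
print(x.shape)
# torch.Size([1, 50, 4, 4])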
Moving the Model to the GPU
A model is a collection of parameters, and by default it is created on the CPU:
- Get the model's device:
print(next(model.parameters()).device)
# cpu
- Then, move the model to the GPU:
device = torch.device("cuda:0")
model.to(device)
print(next(model.parameters()).device)
# cuda:0
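- In code that should also run on machines without a GPU, a common pattern is to fall back to the CPU:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)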
Printing a Model Summary
By printing a model summary, we can see the output shape and the number of parameters of each layer in the model.
- Install the torchsummary package:
pip install torchsummary
- Use torchsummary to get the model summary:
from torchsummary import summary
summary(model, input_size=(1, 28, 28))
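- If you only need the total parameter count, it can also be computed directly, without torchsummary:
num_params = sum(p.numel() for p in model.parameters())
print(num_params)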
Defining a Loss Function and an Optimizer
The loss function measures the distance between the model output and the target labels; it is also called the objective function, cost function, or criterion. For classification problems, cross-entropy loss is commonly used.
During training, an optimizer updates the model parameters (also called weights). PyTorch's optim package provides various optimization algorithms, including SGD and its variants such as Adam and RMSprop.
Defining the Loss Function
- First, define the negative log-likelihood loss:
from torch import nn
loss_func = nn.NLLLoss(reduction="sum")
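nn.NLLLoss expects log-probabilities as input, which is why the model's forward ends with F.log_softmax. Combining the two is equivalent to applying nn.CrossEntropyLoss to raw logits; a quick check on random data:
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
ce = nn.CrossEntropyLoss()(logits, targets)
print(torch.allclose(nll, ce))
# True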
- Test the loss function on a mini-batch:
# train_dl comes from PyTorch Basics (2): Loading and Preprocessing Data
for xb, yb in train_dl:
    # move batch to cuda device
    xb = xb.type(torch.float).to(device)
    yb = yb.to(device)
    # get model output
    out = model(xb)
    # calculate loss value
    loss = loss_func(out, yb)
    print(loss.item())
    break
# 72.04580688476562
- Compute the gradients of the model parameters:
# compute gradients
loss.backward()
Defining the Optimizer
- Define the Adam optimizer:
from torch import optim
opt = optim.Adam(model.parameters(), lr=1e-4)
- Set the gradients to zero:
# set gradients to zero
opt.zero_grad()
- Update the model parameters:
# update model parameters
opt.step()
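- Putting the three calls together, one complete update step usually looks like this (xb and yb are the batch from the loop above):
opt.zero_grad()                  # clear old gradients
loss = loss_func(model(xb), yb)  # forward pass
loss.backward()                  # compute gradients
opt.step()                       # update parameters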
Training and Evaluation
- A function that computes the loss for each mini-batch:
def loss_batch(loss_func, xb, yb, yb_h, opt=None):
    # obtain loss
    loss = loss_func(yb_h, yb)
    # obtain performance metric
    metric_b = metrics_batch(yb, yb_h)
    if opt is not None:
        loss.backward()
        opt.step()
        opt.zero_grad()
    return loss.item(), metric_b
- Compute the accuracy for each mini-batch:
def metrics_batch(target, output):
    # obtain output class
    pred = output.argmax(dim=1, keepdim=True)
    # compare output class with target class
    corrects = pred.eq(target.view_as(pred)).sum().item()
    return corrects
- Compute the loss and accuracy over the entire dataset:
def loss_epoch(model, loss_func, dataset_dl, opt=None):
    loss = 0.0
    metric = 0.0
    len_data = len(dataset_dl.dataset)
    for xb, yb in dataset_dl:
        xb = xb.type(torch.float).to(device)
        yb = yb.to(device)
        # obtain model output
        yb_h = model(xb)
        loss_b, metric_b = loss_batch(loss_func, xb, yb, yb_h, opt)
        loss += loss_b
        if metric_b is not None:
            metric += metric_b
    loss /= len_data
    metric /= len_data
    return loss, metric
- Finally, define the train_val function:
def train_val(epochs, model, loss_func, opt, train_dl, val_dl):
    for epoch in range(epochs):
        model.train()
        train_loss, train_metric = loss_epoch(model, loss_func, train_dl, opt)
        model.eval()
        with torch.no_grad():
            val_loss, val_metric = loss_epoch(model, loss_func, val_dl)
        accuracy = 100 * val_metric
        print("epoch: %d, train loss: %.6f, val loss: %.6f, accuracy: %.2f" % (epoch, train_loss, val_loss, accuracy))
- Train the model:
# call train_val function
num_epochs = 5
train_val(num_epochs, model, loss_func, opt, train_dl, val_dl)
This prints the training loss, validation loss, and accuracy for each epoch.
Saving and Loading a Model
Method 1:
- First, save the model parameters (the state_dict) to a file:
# define path2weights
path2weights="./models/weights.pt"
# store state_dict to file
torch.save(model.state_dict(), path2weights)
- Before loading the parameters, create a model instance:
# define model: weights are randomly initiated
_model = Net()
- Load the model parameters from the file:
weights = torch.load(path2weights)
- Set the parameters into the model:
_model.load_state_dict(weights)
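- If the loaded model will be used for inference, remember to switch it to evaluation mode so that layers such as dropout and batch normalization behave correctly:
_model.eval()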
Method 2:
- First, save the entire model to a file:
# define a path2model
path2model = "./models/model.pt"
# store model and weights into a file
torch.save(model, path2model)
- Load the model:
_model = torch.load(path2model)
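Note that Method 2 pickles the entire module, so loading the file requires the Net class definition to be available in the loading environment; the state_dict approach of Method 1 is the more portable of the two.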
An overview of torch.load and its map_location argument:
# Load all tensors onto the CPU
>>> torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
>>> torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
>>> torch.load('tensors.pt', map_location={'cuda:1':'cuda:0'})
# Load tensor from io.BytesIO object
>>> with open('tensor.pt', 'rb') as f:
...     buffer = io.BytesIO(f.read())
>>> torch.load(buffer)
# Load a module with 'ascii' encoding for unpickling
>>> torch.load('module.pt', encoding='ascii')
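- For example, to load the weights saved with Method 1 onto whichever device the model currently uses:
weights = torch.load(path2weights, map_location=device)
_model.load_state_dict(weights)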
Automatically choose num_workers based on the operating system:
import sys
num_workers = 0 if sys.platform.startswith('win32') else 4
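This value can then be passed when constructing a DataLoader; a minimal sketch, assuming train_ds is an existing Dataset:
from torch.utils.data import DataLoader
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=num_workers)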
Note: the CUDA Version shown by nvidia-smi is the CUDA driver version, i.e. the highest CUDA version supported by the current graphics driver. The CUDA toolkit we install ourselves is the CUDA runtime version.