Contents:

  • ==Study materials:==
  • ==Study notes:==
  • *Importing packages*
  • *Running on the GPU*
  • *Defining a dataset*
  • *Instantiating the data*
  • *Building the network*
  • *Instantiating the network*
  • *Defining the optimizer*
  • *Loss function*
  • *Loading a model*
  • *Training*
  • *Validating and saving the model*
  • *Visualization*

Study materials:

A PyTorch learning path, from getting started onward:
[PyTorch 学习笔记] 汇总 - 完结撒花
PyTorch/[PyTorch 学习笔记] 3.1 模型创建步骤与 nn.Module

Pytorch打怪路(一)

pytorch进行CIFAR-10分类(1)CIFAR-10数据加载和处理
PyTorch 深度学习:60分钟快速入门
pytorch常用汇总
莫视频课

Study notes:

Importing packages

import torch
import torch.nn as nn
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from torchvision import models
import torch.utils.model_zoo as model_zoo
import torch.nn.functional as F
import torch.optim as optim

Running on the GPU

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
encoder = EncoderCNN(args.embed_size).to(device)  # example: move a model to the device (EncoderCNN/args come from the original tutorial)
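
The same `.to(device)` pattern applies to any module or tensor; a minimal self-contained sketch (the `nn.Linear` layer here is just a placeholder model):

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Any module or tensor can be moved with .to(device);
# inputs and model must live on the same device
layer = nn.Linear(4, 2).to(device)
x = torch.randn(3, 4).to(device)
y = layer(x)
print(y.shape)        # torch.Size([3, 2])
print(y.device.type)  # 'cuda' or 'cpu'
```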

Defining a dataset

class MyDataset(Dataset):
    def __init__(self, data_dir, label_file, transform=None):
        # store the data paths and transform here
        self.data_dir = data_dir
        self.label_file = label_file
        self.transform = transform
        self.labels = []  # fill with the parsed labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, index):
        # load the image and label for this index, apply self.transform, then:
        return image, label

Instantiating the data

Instantiate the dataset and wrap it in a DataLoader, so batches can be read out directly during training:

train_dataset = MyDataset(data_dir=data_dir, label_file=label_file, transform=transform)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
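
A runnable end-to-end sketch, with random tensors standing in for the image files and label parsing (`ToyDataset` is a synthetic placeholder for `MyDataset`):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Synthetic stand-in for MyDataset: n fake 'images' with binary labels."""
    def __init__(self, n=10):
        self.images = torch.randn(n, 3, 8, 8)    # n RGB 8x8 "images"
        self.labels = torch.randint(0, 2, (n,))  # n labels in {0, 1}

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, index):
        return self.images[index], self.labels[index]

loader = DataLoader(ToyDataset(), batch_size=4, shuffle=True)
for x, y in loader:
    print(x.shape, y.shape)  # torch.Size([4, 3, 8, 8]) torch.Size([4])
    break
```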

Building the network

class MyNet(nn.Module):
    def __init__(self, num_class):
        super(MyNet, self).__init__()
        # define the layers here

    def forward(self, x):
        # define the forward computation here
        return x
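
Filling in the skeleton, a minimal working example (the layer sizes and the 8×8 input are arbitrary choices for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyNet(nn.Module):
    def __init__(self, num_class):
        super(MyNet, self).__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 8 * 8, num_class)  # assumes 3x8x8 input

    def forward(self, x):
        x = F.relu(self.conv(x))   # (N, 8, 8, 8)
        x = x.view(x.size(0), -1)  # flatten to (N, 512)
        return self.fc(x)          # raw logits, (N, num_class)

model = MyNet(num_class=5)
out = model(torch.randn(2, 3, 8, 8))
print(out.shape)  # torch.Size([2, 5])
```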

Instantiating the network

model = MyNet(num_class).to(device)

Defining the optimizer

optimizer = optim.Adam(model.parameters(), lr=1e-3)

Loss function

criteon = nn.CrossEntropyLoss().to(device)
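
Note that `nn.CrossEntropyLoss` applies `log_softmax` internally: feed it raw logits and integer class indices, not probabilities or one-hot vectors. A quick check:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.1, 0.2, 3.0]])  # raw scores, no softmax applied
targets = torch.tensor([0, 2])            # class indices, not one-hot
loss = criterion(logits, targets)
print(loss.item() > 0)  # True: a positive scalar
```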

You can also use a loss function of your own design:

class NewLoss(nn.Module):
    def __init__(self):
        super(NewLoss, self).__init__()

    def forward(self, outputs, targets):
        # compute and return a scalar loss tensor here
        pass

# usage
criterion = NewLoss().to(device)
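
As a concrete sketch, here is a hand-written mean-squared-error packaged the same way (`NewLoss` is just a placeholder name):

```python
import torch
import torch.nn as nn

class NewLoss(nn.Module):
    def __init__(self):
        super(NewLoss, self).__init__()

    def forward(self, outputs, targets):
        # Hand-written MSE: any differentiable tensor expression works here
        return ((outputs - targets) ** 2).mean()

criterion = NewLoss()
loss = criterion(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 4.0]))
print(loss.item())  # 2.0, i.e. (0 + 4) / 2
```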

Loading a model

If there is a previously trained model, load it:

checkpoints = torch.load(CKPT_PATH)  # a dict holding the weights, epoch count, etc.
checkpoint = checkpoints['state_dict']
step = checkpoints['epoch']  # the epoch the checkpoint was saved at
model.load_state_dict(checkpoint)
print("=> loaded checkpoint: %s" % CKPT_PATH)
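
For completeness, a round-trip sketch of saving and reloading the same dict format (a temporary file path stands in for `CKPT_PATH`, and the small `nn.Linear` is a placeholder model):

```python
import os
import tempfile
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
path = os.path.join(tempfile.mkdtemp(), 'ckpt.pth')

# Save: bundle the weights and any bookkeeping into one dict
torch.save({'state_dict': model.state_dict(), 'epoch': 7}, path)

# Load: restore the weights into a freshly built model of the same shape
model2 = nn.Linear(4, 2)
checkpoints = torch.load(path)
model2.load_state_dict(checkpoints['state_dict'])
print(checkpoints['epoch'])                      # 7
print(torch.equal(model.weight, model2.weight))  # True
```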

Training

for epoch in range(10):
    model.train()  # required: puts layers like dropout/BN into training mode
    for batchidx, (x, label) in enumerate(train_loader):
        x, label = x.to(device), label.to(device)
        logits = model(x)
        loss = criteon(logits, label)
        # backprop
        optimizer.zero_grad()  # clear the old gradients
        loss.backward()        # backpropagate
        optimizer.step()       # update the parameters
    print(epoch, 'loss:', loss.item())
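
The same loop, runnable end-to-end on synthetic data (a toy linear model and random labels, so the loss values are illustrative only):

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Synthetic classification data: 32 samples, 4 features, 3 classes
x = torch.randn(32, 4)
y = torch.randint(0, 3, (32,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)

model = nn.Linear(4, 3).to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-2)
criteon = nn.CrossEntropyLoss()

for epoch in range(3):
    model.train()                      # training mode
    for batchidx, (xb, yb) in enumerate(train_loader):
        xb, yb = xb.to(device), yb.to(device)
        logits = model(xb)
        loss = criteon(logits, yb)
        optimizer.zero_grad()          # clear the old gradients
        loss.backward()                # backpropagate
        optimizer.step()               # update the parameters
    print(epoch, 'loss:', loss.item())
```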

Validating and saving the model

model.eval()  # required: puts layers like dropout/BN into eval mode
with torch.no_grad():
    total_correct = 0
    total_num = 0
    for x, label in val_loader:
        x, label = x.to(device), label.to(device)
        logits = model(x)
        pred = logits.argmax(dim=1)
        correct = torch.eq(pred, label).float().sum().item()
        total_correct += correct
        total_num += x.size(0)
        print(correct)
    acc = total_correct / total_num
    print(epoch, 'test acc:', acc)

    if acc > acc_best:  # keep the checkpoint with the highest accuracy
        acc_best = acc
        torch.save({'state_dict': model.state_dict(), 'epoch': epoch},
                   'MyNet_' + str(epoch) + '_best.pkl')
        print('Save best statistics done!')
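
A runnable sketch of the evaluation pass, again on synthetic data (the model and loader are placeholders, so the accuracy is whatever chance gives):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
val_loader = DataLoader(
    TensorDataset(torch.randn(16, 4), torch.randint(0, 3, (16,))),
    batch_size=8)
model = nn.Linear(4, 3).to(device)

model.eval()                  # eval mode: dropout off, BN uses running stats
with torch.no_grad():         # no gradient bookkeeping during evaluation
    total_correct, total_num = 0, 0
    for x, label in val_loader:
        x, label = x.to(device), label.to(device)
        pred = model(x).argmax(dim=1)  # predicted class per sample
        total_correct += torch.eq(pred, label).float().sum().item()
        total_num += x.size(0)
acc = total_correct / total_num
print('val acc:', acc)  # somewhere in [0, 1]
```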

Visualization

import visdom

viz = visdom.Visdom()
viz.line([0], [-1], win='loss', opts=dict(title='loss'))  # initialize the curves
viz.line([0], [-1], win='val_acc', opts=dict(title='val_acc'))

Plotting the loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
viz.line([loss.item()], [global_step], win='loss', update='append')  # append the new loss value here

Plotting the accuracy

viz.line([val_acc], [global_step], win='val_acc', update='append')