Implementing Logistic Regression with PyTorch
Logistic Regression
From Linear Regression → Logistic Regression
1. Classification problems
For classification we compute the probability that a sample belongs to each class.
The logistic function σ(x) = 1 / (1 + e^(-x)) maps the whole real line into the (0, 1) probability range.
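A minimal sketch of this squashing behavior (the input values are arbitrary, chosen just for illustration):

import torch

x = torch.tensor([-5.0, 0.0, 5.0])
## Large negative inputs map near 0, large positive inputs near 1
print(torch.sigmoid(x))  ## tensor([0.0067, 0.5000, 0.9933])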
2. Model changes (linear regression → logistic regression)
2.1 Change in model structure
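Linear regression outputs ŷ = x·w + b directly; logistic regression passes that affine output through the sigmoid, ŷ = σ(x·w + b), so the output can be read as the probability P(y = 1 | x).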
2.2 Change in the loss function
To measure the difference between two probability distributions, we turn to cross entropy.
BCELoss is the binary cross-entropy loss.
The closer y_pred is to y_data, the smaller the BCE loss.
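For a single sample the binary cross entropy is loss = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)], summed or averaged over the mini-batch. A minimal sketch of this behavior (the probabilities are made-up values for illustration):

import torch

bce = torch.nn.BCELoss()
y = torch.tensor([1.0])
print(bce(torch.tensor([0.9]), y))  ## ~0.105: prediction close to the label, small loss
print(bce(torch.tensor([0.1]), y))  ## ~2.303: prediction far from the label, large loss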
3. Complete code
The code still follows the same four-part framework: prepare the dataset, design the model, construct the loss and optimizer, and run the training cycle.
import torch

## Prepare dataset: a mini-batch; x_data and y_data are 3x1 tensors
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[0], [0], [1]])
## Design model
## Define a class that inherits from torch.nn.Module
class LogisticRegressionModel(torch.nn.Module):
    ## Constructor: initializes the object
    def __init__(self):
        ## super() calls the parent-class constructor
        super(LogisticRegressionModel, self).__init__()
        ## Linear unit holding two tensors, weight and bias; (1, 1) are the input and output feature dimensions
        self.linear = torch.nn.Linear(1, 1)

    ## Forward pass (feedforward computation)
    def forward(self, x):
        ## sigmoid(w*x + b); torch.sigmoid replaces the deprecated F.sigmoid
        y_pred = torch.sigmoid(self.linear(x))
        return y_pred

model = LogisticRegressionModel()
## Construct loss and optimizer
## Loss function: now BCELoss; reduction='sum' replaces the deprecated size_average=False
criterion = torch.nn.BCELoss(reduction='sum')
## Optimizer: model.parameters() gathers all trainable parameters; lr is the learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
## Training cycle
for epoch in range(100):
    ## Forward pass: compute predictions and the loss
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())
    ## Zero the gradients before backpropagation
    optimizer.zero_grad()
    ## Backward pass
    loss.backward()
    ## Update the parameters
    optimizer.step()
## Output weight and bias
print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

## Test the model
x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
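As an optional check that continues from the listing above, one can plot the learned probability curve; this sketch assumes numpy and matplotlib are available (neither appears in the original code):

import numpy as np
import matplotlib.pyplot as plt

## Evaluate the trained model on a dense grid of inputs
x = np.linspace(0, 10, 200)
x_t = torch.Tensor(x).view((200, 1))
y_t = model(x_t)
y = y_t.data.numpy()

plt.plot(x, y)
plt.axhline(y=0.5, c='r', linestyle='--')  ## the 0.5 decision threshold
plt.xlabel('x')
plt.ylabel('P(y = 1)')
plt.grid()
plt.show()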