Dogs vs. Cats is a Kaggle competition that poses a binary classification problem: telling cats and dogs apart. While learning about convolutional neural networks recently, I built a small network from scratch and trained it, then used transfer learning to apply a model pretrained by others to the same cats-and-dogs dataset, so the two approaches can be compared.

Building a network from scratch

Required libraries

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt

Loading the dataset
The data is downloaded from the URL below; you can also download it yourself in advance.

dataset_url = "https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip"

dataset_path = tf.keras.utils.get_file("cats_and_dogs_filtered.zip", origin=dataset_url, extract=True)
dataset_dir = os.path.join(os.path.dirname(dataset_path), "cats_and_dogs_filtered")

train_cats = os.path.join(dataset_dir,"train","cats")
train_dogs = os.path.join(dataset_dir,"train","dogs")

test_cats = os.path.join(dataset_dir,"validation","cats")
test_dogs = os.path.join(dataset_dir,"validation","dogs")

train_dir = os.path.join(dataset_dir,"train")
test_dir = os.path.join(dataset_dir,"validation")

Count the sizes of the training and test sets

train_dogs_num = len(os.listdir(train_dogs))
train_cats_num = len(os.listdir(train_cats))

test_dogs_num = len(os.listdir(test_dogs))
test_cats_num = len(os.listdir(test_cats))

train_all = train_cats_num+train_dogs_num
test_all = test_cats_num+test_dogs_num

Set the hyperparameters

batch_size = 128
epochs = 50
height = 150
width = 150

Data preprocessing
Our preprocessing consists of the following steps:
① Read the image files.
② Decode the image contents and convert them to a suitable format.
③ Shuffle the images and resize them to a fixed size.
④ Normalize the pixel values.

train_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0/255)
test_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0/255)
train_data_gen = train_generator.flow_from_directory(batch_size=batch_size,
                                                     directory=train_dir,
                                                     shuffle=True,
                                                     target_size=(height,width),
                                                     class_mode="binary")
test_data_gen = test_generator.flow_from_directory(batch_size=batch_size,
                                                     directory=test_dir,
                                                     shuffle=True,
                                                     target_size=(height,width),
                                                     class_mode="binary")
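The rescale=1.0/255 argument above is just an element-wise normalization that maps raw pixel values from [0, 255] into [0, 1]. A quick NumPy sketch of the same step (the random array is only a stand-in for a decoded image):

```python
import numpy as np

# A random 150x150 RGB array standing in for a decoded image
img = np.random.randint(0, 256, size=(150, 150, 3)).astype("float32")

# This is exactly what rescale=1.0/255 applies to every image
scaled = img * (1.0 / 255)

print(scaled.min() >= 0.0 and scaled.max() <= 1.0)  # True
```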

Build the network
The model: three convolution + pooling stages, followed by Dropout, Flatten, and two fully connected layers.

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16,3,padding="same",activation="relu",input_shape=(height,width,3)),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(32,3,padding="same",activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(64,3,padding="same",activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512,activation="relu"),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["acc"])
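As a side note on the architecture, you can trace the size of the Flatten output with quick arithmetic: each default MaxPool2D (pool size 2) halves the spatial size with floor division, while the padding="same" convolutions leave it unchanged:

```python
# 150 -> 75 -> 37 -> 18 after the three Conv2D + MaxPool2D stages
size = 150
for _ in range(3):
    size //= 2

# The last Conv2D has 64 channels, so Flatten emits size*size*64 values
flatten_units = size * size * 64
print(size, flatten_units)  # 18 20736
```

The following Dense(512) layer therefore carries over ten million weights, the bulk of the model's parameters.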

Train the model

history = model.fit(train_data_gen,
                    steps_per_epoch=train_all//batch_size,
                    epochs=epochs,
                    validation_data=test_data_gen,
                    validation_steps=test_all//batch_size)

Visualizing the training results

# Visualize the training results
accuracy = history.history["acc"]
test_accuracy = history.history["val_acc"]
loss = history.history["loss"]
test_loss = history.history["val_loss"]
epochs_range = range(epochs)
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.plot(epochs_range,accuracy,label = "Training Acc")
plt.plot(epochs_range,test_accuracy,label = "Test Acc")
plt.legend()
plt.title("Training and Test Acc")

plt.subplot(1,2,2)
plt.plot(epochs_range,loss,label = "Training loss")
plt.plot(epochs_range,test_loss,label = "Test loss")
plt.legend()
plt.title("Training and Test loss")
plt.show()

Trained on the Dogs vs. Cats data for 50 epochs, the network built from scratch ends up at roughly 70% test accuracy, which is not very high.
Training results:
The training-set accuracy approaches 100%, while the test-set accuracy is much lower.

Epoch 50/50

 1/15 [=>............................] - ETA: 5s - loss: 0.0089 - acc: 1.0000
 2/15 [===>..........................] - ETA: 3s - loss: 0.0071 - acc: 1.0000
 3/15 [=====>........................] - ETA: 4s - loss: 0.0086 - acc: 1.0000
 4/15 [=======>......................] - ETA: 3s - loss: 0.0113 - acc: 0.9978
 5/15 [=========>....................] - ETA: 3s - loss: 0.0163 - acc: 0.9966
 6/15 [===========>..................] - ETA: 3s - loss: 0.0163 - acc: 0.9958
 7/15 [=============>................] - ETA: 2s - loss: 0.0149 - acc: 0.9965
 8/15 [===============>..............] - ETA: 2s - loss: 0.0135 - acc: 0.9969
 9/15 [=================>............] - ETA: 2s - loss: 0.0139 - acc: 0.9964
10/15 [===================>..........] - ETA: 1s - loss: 0.0145 - acc: 0.9959
11/15 [=====================>........] - ETA: 1s - loss: 0.0139 - acc: 0.9963
12/15 [=======================>......] - ETA: 1s - loss: 0.0155 - acc: 0.9953
13/15 [=========================>....] - ETA: 0s - loss: 0.0155 - acc: 0.9944
14/15 [===========================>..] - ETA: 0s - loss: 0.0170 - acc: 0.9943
15/15 [==============================] - 9s 595ms/step - loss: 0.0174 - acc: 0.9941 - val_loss: 1.2763 - val_acc: 0.7522

This analysis points to overfitting, so let's make some adjustments to the dataset.

Apply random rotation, horizontal flipping, and random zoom to the original dataset:

train_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0/255,
                                                                  rotation_range=45,      # random rotation
                                                                  width_shift_range=.15,  # random horizontal shift
                                                                  height_shift_range=.15, # random vertical shift
                                                                  horizontal_flip=True,   # horizontal flip
                                                                  zoom_range=0.5          # random zoom
                                                                  )
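Of these augmentations, horizontal_flip is the simplest to picture: it reverses the image array along the width axis. A minimal NumPy illustration:

```python
import numpy as np

# A tiny single-channel 2x3 "image"
img = np.array([[1, 2, 3],
                [4, 5, 6]])

# A horizontal flip reverses the width (column) axis
flipped = img[:, ::-1]
print(flipped)
# [[3 2 1]
#  [6 5 4]]
```

Rotation and zoom are similar in spirit but involve interpolation, which is why they are handled by the generator rather than plain slicing.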

The final training results:

Epoch 50/50
 1/15 [=>............................] - ETA: 5s - loss: 0.5440 - acc: 0.7031
 2/15 [===>..........................] - ETA: 9s - loss: 0.5151 - acc: 0.7422
 3/15 [=====>........................] - ETA: 10s - loss: 0.5249 - acc: 0.7083
 4/15 [=======>......................] - ETA: 10s - loss: 0.5082 - acc: 0.7227
 5/15 [=========>....................] - ETA: 8s - loss: 0.4787 - acc: 0.7382 
 6/15 [===========>..................] - ETA: 8s - loss: 0.4764 - acc: 0.7417
 7/15 [=============>................] - ETA: 7s - loss: 0.4801 - acc: 0.7406
 8/15 [===============>..............] - ETA: 6s - loss: 0.4759 - acc: 0.7531
 9/15 [=================>............] - ETA: 5s - loss: 0.4805 - acc: 0.7554
10/15 [===================>..........] - ETA: 4s - loss: 0.4877 - acc: 0.7459
11/15 [=====================>........] - ETA: 3s - loss: 0.4944 - acc: 0.7390
12/15 [=======================>......] - ETA: 2s - loss: 0.4969 - acc: 0.7325
13/15 [=========================>....] - ETA: 1s - loss: 0.4939 - acc: 0.7345
14/15 [===========================>..] - ETA: 0s - loss: 0.4933 - acc: 0.7368
15/15 [==============================] - 23s 2s/step - loss: 0.4957 - acc: 0.7377 - val_loss: 0.5394 - val_acc: 0.7277

The overfitting has improved, but the accuracy did not rise accordingly. This is the result after 50 epochs.

Transfer learning

Transfer learning means applying a network that someone else has already trained directly to your own data. The network used here is a pretrained VGG16.

Loading the model

# Load the VGG16 model pretrained on ImageNet, without its classifier head
conv_base = tf.keras.applications.VGG16(weights='imagenet', include_top=False)
# Freeze the convolutional base so its weights are not updated during training
conv_base.trainable = False
# Build the model
model = tf.keras.Sequential()
model.add(conv_base)
model.add(tf.keras.layers.GlobalAveragePooling2D())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
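Here GlobalAveragePooling2D stands in for Flatten: it averages each feature map from conv_base down to a single number, so the classifier head stays small regardless of the spatial size. The same operation in plain NumPy:

```python
import numpy as np

# A toy (height, width, channels) feature tensor
features = np.arange(12, dtype="float32").reshape(2, 2, 3)

# Global average pooling: mean over the two spatial axes
pooled = features.mean(axis=(0, 1))
print(pooled)  # [4.5 5.5 6.5]
```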

Training the model

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(train_data_gen,
                    epochs=10,  # VGG16 is slow to train, so only 10 epochs
                    steps_per_epoch=train_all//batch_size,
                    validation_data=test_data_gen,
                    validation_steps=test_all//batch_size)

Training results

Epoch 10/10
 1/15 [=>............................] - ETA: 4:36 - loss: 0.3333 - acc: 0.8438
 2/15 [===>..........................] - ETA: 2:14 - loss: 0.3917 - acc: 0.8125
 3/15 [=====>........................] - ETA: 1:25 - loss: 0.3870 - acc: 0.8155
 4/15 [=======>......................] - ETA: 1:01 - loss: 0.3967 - acc: 0.8082
 5/15 [=========>....................] - ETA: 46s - loss: 0.3994 - acc: 0.8125 
 6/15 [===========>..................] - ETA: 36s - loss: 0.3935 - acc: 0.8139
 7/15 [=============>................] - ETA: 28s - loss: 0.3976 - acc: 0.8160
 8/15 [===============>..............] - ETA: 22s - loss: 0.3948 - acc: 0.8166
 9/15 [=================>............] - ETA: 17s - loss: 0.3958 - acc: 0.8207
10/15 [===================>..........] - ETA: 13s - loss: 0.3970 - acc: 0.8198
11/15 [=====================>........] - ETA: 10s - loss: 0.3934 - acc: 0.8243
12/15 [=======================>......] - ETA: 7s - loss: 0.3946 - acc: 0.8246 
13/15 [=========================>....] - ETA: 4s - loss: 0.3880 - acc: 0.8274
14/15 [===========================>..] - ETA: 2s - loss: 0.3900 - acc: 0.8251
15/15 [==============================] - 48s 3s/step - loss: 0.3910 - acc: 0.8237 - val_loss: 0.4328 - val_acc: 0.8025

Because the VGG16 model is time-consuming to train, I set only 10 epochs, yet the final result is already much better than the network built from scratch.

There is no sign of overfitting (the dataset was preprocessed), and with only 10 epochs the accuracy already reaches about 80%.

Summary

The comparison shows that the network built from scratch does not match the transfer-learning model in accuracy, possibly because my network generalizes poorly. If anyone passing by has a better network model, I'd be glad to discuss it.