1. Installing the Libraries

Hello everyone! Today let's look at implementing handwritten digit recognition, a classic deep learning task, in Python. First we need to install a few libraries: numpy, matplotlib, and tensorflow. You can install them from the system command line, but I recommend using the terminal inside PyCharm, because quite a few of my classmates hit all sorts of errors when installing from the cmd console. Keep in mind that installing from PyCharm actually installs into a virtual environment that belongs to the current project, so when you create a new project you will need to install the packages again.

Here are the basic installation commands:

pip install numpy
pip install matplotlib
pip install tensorflow

Note: if the commands above fail or download extremely slowly, use the commands below instead; the speed goes from horse cart to high-speed rail. All they really do is switch to a mirror, i.e. a different download source.

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple numpy
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple matplotlib
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple tensorflow

If the download still gives you trouble, try one of the other mirrors below:

Command format for installing from a specific package index: pip install -i [ source_url ] [ package_name ]
Tsinghua: https://pypi.tuna.tsinghua.edu.cn/simple
Douban: http://pypi.douban.com/simple/
Aliyun: http://mirrors.aliyun.com/pypi/simple/
Baidu: https://mirror.baidu.com/pypi/simple
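
Once the installation finishes, it is worth confirming that all three libraries import correctly from the environment you installed into (run this in the PyCharm Python console or as a small script; the exact version numbers you see will of course differ):

import numpy
import matplotlib
import tensorflow as tf

# If any of these imports fails, the corresponding pip install did not go into this environment.
print("numpy:", numpy.__version__)
print("matplotlib:", matplotlib.__version__)
print("tensorflow:", tf.__version__)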

2. Recognizing Handwritten Digits with a Feedforward Neural Network

As we all know, a feedforward network is fully connected: every neuron is wired to every neuron in the previous layer, so as the network gets deeper the number of parameters grows very quickly (we will check the actual parameter count right after the code). Without further ado, let's try running it:

"""
@author: 不败顽童
purpose: recognize handwritten digits 0-9 with a feedforward neural network
"""

from keras.utils import to_categorical
from keras import models, layers, regularizers
from keras.optimizers import RMSprop
from keras.datasets import mnist
import matplotlib.pyplot as plt

# Load the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images = train_images.reshape((60000, 28 * 28)).astype('float')
test_images = test_images.reshape((10000, 28 * 28)).astype('float')
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

network = models.Sequential()
network.add(layers.Dense(units=128, activation='relu', input_shape=(28 * 28,),
                         kernel_regularizer=regularizers.l1(0.0001)))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(units=32, activation='relu',
                         kernel_regularizer=regularizers.l1(0.0001)))
network.add(layers.Dropout(0.1))
network.add(layers.Dense(units=10, activation='softmax'))

# Compile the model
network.compile(optimizer=RMSprop(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

# Train the network with fit; epochs is how many full passes over the training data, batch_size is how many samples are fed in per update
history = network.fit(train_images, train_labels, epochs=200,
                      batch_size=128,
                      verbose=2)

# Evaluate the model on the test set
test_loss, test_accuracy = network.evaluate(test_images, test_labels)
print("test_loss:", test_loss, "    test_accuracy:", test_accuracy)

# Pull the training loss and accuracy out of history
acc = history.history['accuracy']  # training accuracy
loss = history.history['loss']  # training loss

plt.plot(acc, label='Training Accuracy')
plt.plot(loss, label='Training Loss')
plt.title('Training Accuracy and loss')
plt.legend()
plt.show()
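
To see just how many parameters those fully connected layers add up to (the point made at the start of this section), you can call network.summary() right after building the model. A minimal check, with a rough hand tally of this particular architecture in the comments:

# Per-layer parameter counts of the model built above:
#   Dense(128): 28*28*128 + 128 = 100,480
#   Dense(32):  128*32 + 32     =   4,128
#   Dense(10):  32*10 + 10      =     330
network.summary()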

Don't panic if the first run seems slow: on the first run the program automatically downloads the dataset for you. Once your computer's fan gets loud, training has actually started, and when the fan quiets down again the run is finished.

Now let's see how the run turned out:

The console prints quite a lot, so feel free to scroll straight to the bottom, haha:

D:\develop\PythonProject\venv\Scripts\python.exe D:/develop/PythonProject/DeepLearningWork/hand_write_recognition02/hand_write_recognition_of_NN.py
2021-04-27 09:31:28.446446: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-04-27 09:31:28.446607: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-04-27 09:32:06.106829: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-27 09:32:06.534560: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-04-27 09:32:07.573305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1050 computeCapability: 6.1
coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s
2021-04-27 09:32:07.604093: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-04-27 09:32:07.616122: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2021-04-27 09:32:07.628975: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found
2021-04-27 09:32:07.642939: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
2021-04-27 09:32:07.655452: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
2021-04-27 09:32:07.666875: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2021-04-27 09:32:07.677202: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found
2021-04-27 09:32:07.687321: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2021-04-27 09:32:07.687506: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-04-27 09:32:07.745876: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-04-27 09:32:07.799900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-27 09:32:07.800018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      
2021-04-27 09:32:07.800092: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-27 09:32:11.162556: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/200
469/469 - 5s - loss: 3.6118 - accuracy: 0.4792
Epoch 2/200
469/469 - 1s - loss: 1.3483 - accuracy: 0.6499
Epoch 3/200
469/469 - 1s - loss: 1.0753 - accuracy: 0.7307
Epoch 4/200
469/469 - 1s - loss: 0.7956 - accuracy: 0.8271
Epoch 5/200
469/469 - 1s - loss: 0.6231 - accuracy: 0.8825
Epoch 6/200
469/469 - 1s - loss: 0.5228 - accuracy: 0.9154
Epoch 7/200
469/469 - 1s - loss: 0.4439 - accuracy: 0.9307
Epoch 8/200
469/469 - 1s - loss: 0.4178 - accuracy: 0.9351
Epoch 9/200
469/469 - 1s - loss: 0.3870 - accuracy: 0.9392
Epoch 10/200
469/469 - 1s - loss: 0.3691 - accuracy: 0.9434
Epoch 11/200
469/469 - 1s - loss: 0.3650 - accuracy: 0.9461
Epoch 12/200
469/469 - 1s - loss: 0.3529 - accuracy: 0.9466
Epoch 13/200
469/469 - 1s - loss: 0.3386 - accuracy: 0.9508
Epoch 14/200
469/469 - 1s - loss: 0.3296 - accuracy: 0.9490
Epoch 15/200
469/469 - 1s - loss: 0.3229 - accuracy: 0.9504
Epoch 16/200
469/469 - 1s - loss: 0.3161 - accuracy: 0.9518
Epoch 17/200
469/469 - 1s - loss: 0.3071 - accuracy: 0.9530
Epoch 18/200
469/469 - 1s - loss: 0.3043 - accuracy: 0.9542
Epoch 19/200
469/469 - 1s - loss: 0.2935 - accuracy: 0.9553
Epoch 20/200
469/469 - 1s - loss: 0.2853 - accuracy: 0.9560
Epoch 21/200
469/469 - 1s - loss: 0.2842 - accuracy: 0.9573
Epoch 22/200
469/469 - 1s - loss: 0.2851 - accuracy: 0.9565
Epoch 23/200
469/469 - 1s - loss: 0.2817 - accuracy: 0.9563
Epoch 24/200
469/469 - 1s - loss: 0.2758 - accuracy: 0.9579
Epoch 25/200
469/469 - 1s - loss: 0.2692 - accuracy: 0.9589
Epoch 26/200
469/469 - 1s - loss: 0.2717 - accuracy: 0.9581
Epoch 27/200
469/469 - 1s - loss: 0.2689 - accuracy: 0.9596
Epoch 28/200
469/469 - 1s - loss: 0.2612 - accuracy: 0.9586
Epoch 29/200
469/469 - 1s - loss: 0.2577 - accuracy: 0.9606
Epoch 30/200
469/469 - 1s - loss: 0.2621 - accuracy: 0.9598
Epoch 31/200
469/469 - 1s - loss: 0.2576 - accuracy: 0.9606
Epoch 32/200
469/469 - 1s - loss: 0.2533 - accuracy: 0.9605
Epoch 33/200
469/469 - 1s - loss: 0.2556 - accuracy: 0.9607
Epoch 34/200
469/469 - 1s - loss: 0.2557 - accuracy: 0.9614
Epoch 35/200
469/469 - 1s - loss: 0.2510 - accuracy: 0.9611
Epoch 36/200
469/469 - 1s - loss: 0.2484 - accuracy: 0.9616
Epoch 37/200
469/469 - 1s - loss: 0.2484 - accuracy: 0.9622
Epoch 38/200
469/469 - 1s - loss: 0.2441 - accuracy: 0.9623
Epoch 39/200
469/469 - 1s - loss: 0.2432 - accuracy: 0.9625
Epoch 40/200
469/469 - 1s - loss: 0.2411 - accuracy: 0.9621
Epoch 41/200
469/469 - 1s - loss: 0.2445 - accuracy: 0.9614
Epoch 42/200
469/469 - 1s - loss: 0.2412 - accuracy: 0.9632
Epoch 43/200
469/469 - 1s - loss: 0.2432 - accuracy: 0.9614
Epoch 44/200
469/469 - 1s - loss: 0.2452 - accuracy: 0.9628
Epoch 45/200
469/469 - 1s - loss: 0.2462 - accuracy: 0.9617
Epoch 46/200
469/469 - 1s - loss: 0.2394 - accuracy: 0.9629
Epoch 47/200
469/469 - 1s - loss: 0.2418 - accuracy: 0.9633
Epoch 48/200
469/469 - 1s - loss: 0.2394 - accuracy: 0.9625
Epoch 49/200
469/469 - 1s - loss: 0.2381 - accuracy: 0.9632
Epoch 50/200
469/469 - 1s - loss: 0.2339 - accuracy: 0.9641
Epoch 51/200
469/469 - 1s - loss: 0.2364 - accuracy: 0.9625
Epoch 52/200
469/469 - 1s - loss: 0.2322 - accuracy: 0.9636
Epoch 53/200
469/469 - 1s - loss: 0.2343 - accuracy: 0.9636
Epoch 54/200
469/469 - 1s - loss: 0.2326 - accuracy: 0.9636
Epoch 55/200
469/469 - 1s - loss: 0.2340 - accuracy: 0.9638
Epoch 56/200
469/469 - 1s - loss: 0.2290 - accuracy: 0.9650
Epoch 57/200
469/469 - 1s - loss: 0.2299 - accuracy: 0.9638
Epoch 58/200
469/469 - 1s - loss: 0.2349 - accuracy: 0.9637
Epoch 59/200
469/469 - 1s - loss: 0.2310 - accuracy: 0.9640
Epoch 60/200
469/469 - 1s - loss: 0.2341 - accuracy: 0.9647
Epoch 61/200
469/469 - 1s - loss: 0.2390 - accuracy: 0.9622
Epoch 62/200
469/469 - 1s - loss: 0.2199 - accuracy: 0.9670
Epoch 63/200
469/469 - 1s - loss: 0.2292 - accuracy: 0.9648
Epoch 64/200
469/469 - 1s - loss: 0.2315 - accuracy: 0.9642
Epoch 65/200
469/469 - 1s - loss: 0.2274 - accuracy: 0.9658
Epoch 66/200
469/469 - 1s - loss: 0.2261 - accuracy: 0.9640
Epoch 67/200
469/469 - 1s - loss: 0.2265 - accuracy: 0.9649
Epoch 68/200
469/469 - 1s - loss: 0.2280 - accuracy: 0.9645
Epoch 69/200
469/469 - 1s - loss: 0.2307 - accuracy: 0.9633
Epoch 70/200
469/469 - 1s - loss: 0.2268 - accuracy: 0.9653
Epoch 71/200
469/469 - 1s - loss: 0.2268 - accuracy: 0.9639
Epoch 72/200
469/469 - 1s - loss: 0.2313 - accuracy: 0.9645
Epoch 73/200
469/469 - 1s - loss: 0.2280 - accuracy: 0.9645
Epoch 74/200
469/469 - 1s - loss: 0.2320 - accuracy: 0.9643
Epoch 75/200
469/469 - 1s - loss: 0.2299 - accuracy: 0.9643
Epoch 76/200
469/469 - 1s - loss: 0.2202 - accuracy: 0.9652
Epoch 77/200
469/469 - 1s - loss: 0.2277 - accuracy: 0.9652
Epoch 78/200
469/469 - 1s - loss: 0.2294 - accuracy: 0.9645
Epoch 79/200
469/469 - 1s - loss: 0.2280 - accuracy: 0.9654
Epoch 80/200
469/469 - 1s - loss: 0.2260 - accuracy: 0.9647
Epoch 81/200
469/469 - 1s - loss: 0.2256 - accuracy: 0.9656
Epoch 82/200
469/469 - 1s - loss: 0.2233 - accuracy: 0.9657
Epoch 83/200
469/469 - 1s - loss: 0.2252 - accuracy: 0.9655
Epoch 84/200
469/469 - 1s - loss: 0.2192 - accuracy: 0.9664
Epoch 85/200
469/469 - 1s - loss: 0.2272 - accuracy: 0.9655
Epoch 86/200
469/469 - 1s - loss: 0.2259 - accuracy: 0.9644
Epoch 87/200
469/469 - 1s - loss: 0.2207 - accuracy: 0.9661
Epoch 88/200
469/469 - 1s - loss: 0.2247 - accuracy: 0.9652
Epoch 89/200
469/469 - 1s - loss: 0.2270 - accuracy: 0.9647
Epoch 90/200
469/469 - 1s - loss: 0.2246 - accuracy: 0.9663
Epoch 91/200
469/469 - 1s - loss: 0.2248 - accuracy: 0.9656
Epoch 92/200
469/469 - 1s - loss: 0.2232 - accuracy: 0.9655
Epoch 93/200
469/469 - 1s - loss: 0.2184 - accuracy: 0.9664
Epoch 94/200
469/469 - 1s - loss: 0.2237 - accuracy: 0.9640
Epoch 95/200
469/469 - 1s - loss: 0.2201 - accuracy: 0.9665
Epoch 96/200
469/469 - 1s - loss: 0.2210 - accuracy: 0.9666
Epoch 97/200
469/469 - 1s - loss: 0.2289 - accuracy: 0.9641
Epoch 98/200
469/469 - 1s - loss: 0.2196 - accuracy: 0.9654
Epoch 99/200
469/469 - 1s - loss: 0.2215 - accuracy: 0.9648
Epoch 100/200
469/469 - 1s - loss: 0.2220 - accuracy: 0.9650
Epoch 101/200
469/469 - 1s - loss: 0.2172 - accuracy: 0.9666
Epoch 102/200
469/469 - 1s - loss: 0.2234 - accuracy: 0.9653
Epoch 103/200
469/469 - 1s - loss: 0.2203 - accuracy: 0.9663
Epoch 104/200
469/469 - 1s - loss: 0.2166 - accuracy: 0.9655
Epoch 105/200
469/469 - 1s - loss: 0.2237 - accuracy: 0.9650
Epoch 106/200
469/469 - 1s - loss: 0.2183 - accuracy: 0.9671
Epoch 107/200
469/469 - 1s - loss: 0.2172 - accuracy: 0.9655
Epoch 108/200
469/469 - 1s - loss: 0.2172 - accuracy: 0.9663
Epoch 109/200
469/469 - 1s - loss: 0.2175 - accuracy: 0.9661
Epoch 110/200
469/469 - 1s - loss: 0.2137 - accuracy: 0.9665
Epoch 111/200
469/469 - 1s - loss: 0.2166 - accuracy: 0.9670
Epoch 112/200
469/469 - 1s - loss: 0.2170 - accuracy: 0.9661
Epoch 113/200
469/469 - 1s - loss: 0.2171 - accuracy: 0.9659
Epoch 114/200
469/469 - 1s - loss: 0.2198 - accuracy: 0.9650
Epoch 115/200
469/469 - 1s - loss: 0.2200 - accuracy: 0.9650
Epoch 116/200
469/469 - 1s - loss: 0.2161 - accuracy: 0.9667
Epoch 117/200
469/469 - 1s - loss: 0.2187 - accuracy: 0.9655
Epoch 118/200
469/469 - 1s - loss: 0.2178 - accuracy: 0.9663
Epoch 119/200
469/469 - 1s - loss: 0.2128 - accuracy: 0.9651
Epoch 120/200
469/469 - 1s - loss: 0.2127 - accuracy: 0.9666
Epoch 121/200
469/469 - 1s - loss: 0.2161 - accuracy: 0.9669
Epoch 122/200
469/469 - 1s - loss: 0.2171 - accuracy: 0.9658
Epoch 123/200
469/469 - 1s - loss: 0.2102 - accuracy: 0.9670
Epoch 124/200
469/469 - 1s - loss: 0.2122 - accuracy: 0.9668
Epoch 125/200
469/469 - 1s - loss: 0.2144 - accuracy: 0.9659
Epoch 126/200
469/469 - 1s - loss: 0.2109 - accuracy: 0.9674
Epoch 127/200
469/469 - 1s - loss: 0.2144 - accuracy: 0.9659
Epoch 128/200
469/469 - 1s - loss: 0.2119 - accuracy: 0.9670
Epoch 129/200
469/469 - 1s - loss: 0.2134 - accuracy: 0.9669
Epoch 130/200
469/469 - 1s - loss: 0.2154 - accuracy: 0.9657
Epoch 131/200
469/469 - 1s - loss: 0.2152 - accuracy: 0.9660
Epoch 132/200
469/469 - 1s - loss: 0.2135 - accuracy: 0.9667
Epoch 133/200
469/469 - 1s - loss: 0.2094 - accuracy: 0.9667
Epoch 134/200
469/469 - 1s - loss: 0.2131 - accuracy: 0.9665
Epoch 135/200
469/469 - 1s - loss: 0.2138 - accuracy: 0.9671
Epoch 136/200
469/469 - 1s - loss: 0.2170 - accuracy: 0.9659
Epoch 137/200
469/469 - 1s - loss: 0.2105 - accuracy: 0.9670
Epoch 138/200
469/469 - 1s - loss: 0.2126 - accuracy: 0.9671
Epoch 139/200
469/469 - 1s - loss: 0.2148 - accuracy: 0.9661
Epoch 140/200
469/469 - 1s - loss: 0.2110 - accuracy: 0.9668
Epoch 141/200
469/469 - 1s - loss: 0.2142 - accuracy: 0.9665
Epoch 142/200
469/469 - 1s - loss: 0.2116 - accuracy: 0.9661
Epoch 143/200
469/469 - 1s - loss: 0.2075 - accuracy: 0.9677
Epoch 144/200
469/469 - 1s - loss: 0.2055 - accuracy: 0.9681
Epoch 145/200
469/469 - 1s - loss: 0.2081 - accuracy: 0.9674
Epoch 146/200
469/469 - 1s - loss: 0.2113 - accuracy: 0.9662
Epoch 147/200
469/469 - 1s - loss: 0.2127 - accuracy: 0.9657
Epoch 148/200
469/469 - 1s - loss: 0.2061 - accuracy: 0.9680
Epoch 149/200
469/469 - 1s - loss: 0.2119 - accuracy: 0.9666
Epoch 150/200
469/469 - 1s - loss: 0.2042 - accuracy: 0.9672
Epoch 151/200
469/469 - 1s - loss: 0.2112 - accuracy: 0.9668
Epoch 152/200
469/469 - 1s - loss: 0.2025 - accuracy: 0.9688
Epoch 153/200
469/469 - 1s - loss: 0.2069 - accuracy: 0.9669
Epoch 154/200
469/469 - 1s - loss: 0.2104 - accuracy: 0.9676
Epoch 155/200
469/469 - 1s - loss: 0.2056 - accuracy: 0.9676
Epoch 156/200
469/469 - 1s - loss: 0.2141 - accuracy: 0.9666
Epoch 157/200
469/469 - 1s - loss: 0.2095 - accuracy: 0.9662
Epoch 158/200
469/469 - 1s - loss: 0.2070 - accuracy: 0.9663
Epoch 159/200
469/469 - 1s - loss: 0.2068 - accuracy: 0.9671
Epoch 160/200
469/469 - 1s - loss: 0.2078 - accuracy: 0.9667
Epoch 161/200
469/469 - 1s - loss: 0.2084 - accuracy: 0.9664
Epoch 162/200
469/469 - 1s - loss: 0.2105 - accuracy: 0.9659
Epoch 163/200
469/469 - 1s - loss: 0.2050 - accuracy: 0.9669
Epoch 164/200
469/469 - 1s - loss: 0.2070 - accuracy: 0.9673
Epoch 165/200
469/469 - 1s - loss: 0.2054 - accuracy: 0.9675
Epoch 166/200
469/469 - 1s - loss: 0.2007 - accuracy: 0.9680
Epoch 167/200
469/469 - 1s - loss: 0.2073 - accuracy: 0.9653
Epoch 168/200
469/469 - 1s - loss: 0.2073 - accuracy: 0.9670
Epoch 169/200
469/469 - 1s - loss: 0.2062 - accuracy: 0.9671
Epoch 170/200
469/469 - 1s - loss: 0.2043 - accuracy: 0.9677
Epoch 171/200
469/469 - 1s - loss: 0.2093 - accuracy: 0.9661
Epoch 172/200
469/469 - 1s - loss: 0.2042 - accuracy: 0.9671
Epoch 173/200
469/469 - 1s - loss: 0.2050 - accuracy: 0.9677
Epoch 174/200
469/469 - 1s - loss: 0.2053 - accuracy: 0.9673
Epoch 175/200
469/469 - 1s - loss: 0.2014 - accuracy: 0.9679
Epoch 176/200
469/469 - 1s - loss: 0.2043 - accuracy: 0.9680
Epoch 177/200
469/469 - 1s - loss: 0.2040 - accuracy: 0.9676
Epoch 178/200
469/469 - 1s - loss: 0.2018 - accuracy: 0.9678
Epoch 179/200
469/469 - 1s - loss: 0.2029 - accuracy: 0.9671
Epoch 180/200
469/469 - 1s - loss: 0.2010 - accuracy: 0.9675
Epoch 181/200
469/469 - 1s - loss: 0.2037 - accuracy: 0.9668
Epoch 182/200
469/469 - 1s - loss: 0.2042 - accuracy: 0.9676
Epoch 183/200
469/469 - 1s - loss: 0.2032 - accuracy: 0.9676
Epoch 184/200
469/469 - 1s - loss: 0.2044 - accuracy: 0.9668
Epoch 185/200
469/469 - 1s - loss: 0.2003 - accuracy: 0.9680
Epoch 186/200
469/469 - 1s - loss: 0.2037 - accuracy: 0.9671
Epoch 187/200
469/469 - 1s - loss: 0.1976 - accuracy: 0.9681
Epoch 188/200
469/469 - 1s - loss: 0.2001 - accuracy: 0.9682
Epoch 189/200
469/469 - 1s - loss: 0.1983 - accuracy: 0.9676
Epoch 190/200
469/469 - 1s - loss: 0.1994 - accuracy: 0.9679
Epoch 191/200
469/469 - 1s - loss: 0.1982 - accuracy: 0.9683
Epoch 192/200
469/469 - 1s - loss: 0.2054 - accuracy: 0.9674
Epoch 193/200
469/469 - 1s - loss: 0.2042 - accuracy: 0.9672
Epoch 194/200
469/469 - 1s - loss: 0.2026 - accuracy: 0.9668
Epoch 195/200
469/469 - 1s - loss: 0.2034 - accuracy: 0.9678
Epoch 196/200
469/469 - 1s - loss: 0.2048 - accuracy: 0.9675
Epoch 197/200
469/469 - 1s - loss: 0.2054 - accuracy: 0.9669
Epoch 198/200
469/469 - 1s - loss: 0.1963 - accuracy: 0.9691
Epoch 199/200
469/469 - 1s - loss: 0.2049 - accuracy: 0.9672
Epoch 200/200
469/469 - 1s - loss: 0.2040 - accuracy: 0.9679
313/313 [==============================] - 0s 602us/step - loss: 0.2115 - accuracy: 0.9732
test_loss: 0.211515411734581     test_accuracy: 0.9732000231742859

The training loss keeps falling and the accuracy keeps rising from epoch to epoch; here is the data from the 200th (final) epoch:

Epoch 200/200
469/469 - 1s - loss: 0.2040 - accuracy: 0.9679

You can see that test_loss is 0.2115, which is passable, but for something like our face recognition program this level of performance would be nowhere near good enough.

You can see that test_accuracy is 0.9732000231742859, i.e. the test accuracy reaches 97.3%:

test_loss: 0.211515411734581     test_accuracy: 0.9732000231742859
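
If you want to poke at the trained model yourself, here is a minimal sketch you can append after network.fit in the script above (index 0 is just an arbitrary test sample):

import numpy as np

sample = test_images[0:1]                           # one flattened 784-pixel image
probs = network.predict(sample)                     # 10 softmax probabilities
print("predicted digit:", np.argmax(probs))
print("true digit:", np.argmax(test_labels[0]))     # labels were one-hot encoded above

plt.imshow(sample.reshape(28, 28), cmap='gray')     # back to 28x28 so matplotlib can draw it
plt.show()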

Did everyone get it running? If not, please leave a comment below and I (不败顽童) will reply as soon as I see it.

3. Recognizing Handwritten Digits with a Convolutional Neural Network

A convolutional neural network is built from convolutional and pooling layers (plus a few dense layers at the end); its big advantage is that weight sharing greatly reduces the number of weights each layer needs (we will put numbers on that right after the code). Let's see how a CNN handles handwritten digit recognition:

"""
@author: 不败顽童
purpose: train a model that recognizes handwritten digits 0-9 with a convolutional neural network
"""
from keras.utils import to_categorical
from keras import models, layers
from keras.optimizers import RMSprop
from keras.datasets import mnist
import matplotlib.pyplot as plt

# Load the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images = train_images.reshape((60000, 28, 28, 1)).astype('float') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)


# Build a LeNet-style network
def LeNet():
    network = models.Sequential()
    network.add(layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
    network.add(layers.AveragePooling2D((2, 2)))
    network.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'))
    network.add(layers.AveragePooling2D((2, 2)))
    network.add(layers.Conv2D(filters=120, kernel_size=(3, 3), activation='relu'))
    network.add(layers.Flatten())
    network.add(layers.Dense(84, activation='relu'))
    network.add(layers.Dense(10, activation='softmax'))
    return network


network = LeNet()
network.compile(optimizer=RMSprop(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

# Train the network with fit; epochs is how many full passes over the training data, batch_size is how many samples are fed in per update
history = network.fit(train_images, train_labels, epochs=30, batch_size=128, verbose=2)

test_loss, test_accuracy = network.evaluate(test_images, test_labels)
print("test_loss:", test_loss, "    test_accuracy:", test_accuracy)

# Pull the training loss and accuracy out of history
acc = history.history['accuracy']  # training accuracy
loss = history.history['loss']  # training loss

plt.plot(acc, label='Training Accuracy')
plt.plot(loss, label='Training Loss')
plt.title('Training Accuracy and loss')
plt.legend()
plt.show()
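
Before running it, here is the weight sharing mentioned above in concrete numbers; this is just an illustrative tally you could append after LeNet() is called, not part of the original script:

# The first Conv2D layer reuses the same six 3x3 kernels at every image position,
# so it only needs 3*3*1*6 + 6 = 60 trainable weights.
conv_params = 3 * 3 * 1 * 6 + 6       # -> 60
# A fully connected layer mapping the 784 input pixels to 6 units would already need 4,710.
dense_params = 28 * 28 * 6 + 6        # -> 4710
print(conv_params, dense_params)
network.summary()                      # prints the per-layer parameter counts of the LeNet model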

This run takes roughly 2-3 minutes, so don't worry, just wait patiently.

Next, let's look at the results:

D:\develop\PythonProject\venv\Scripts\python.exe D:/develop/PythonProject/DeepLearningWork/hand_write_recognition02/hand_write_recognition_of_CNN.py
2021-04-27 09:45:41.347990: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-04-27 09:45:41.348136: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-04-27 09:45:45.887816: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-27 09:45:45.889419: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-04-27 09:45:46.658241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1050 computeCapability: 6.1
coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s
2021-04-27 09:45:46.665101: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-04-27 09:45:46.671868: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2021-04-27 09:45:46.678511: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found
2021-04-27 09:45:46.685557: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cufft64_10.dll'; dlerror: cufft64_10.dll not found
2021-04-27 09:45:46.692338: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'curand64_10.dll'; dlerror: curand64_10.dll not found
2021-04-27 09:45:46.699218: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found
2021-04-27 09:45:46.706152: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cusparse64_11.dll'; dlerror: cusparse64_11.dll not found
2021-04-27 09:45:46.712747: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudnn64_8.dll'; dlerror: cudnn64_8.dll not found
2021-04-27 09:45:46.712872: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-04-27 09:45:46.713529: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-04-27 09:45:46.714082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-27 09:45:46.714275: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      
2021-04-27 09:45:46.714760: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-27 09:45:46.913314: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/30
469/469 - 7s - loss: 0.3500 - accuracy: 0.8952
Epoch 2/30
469/469 - 5s - loss: 0.0916 - accuracy: 0.9723
Epoch 3/30
469/469 - 5s - loss: 0.0593 - accuracy: 0.9816
Epoch 4/30
469/469 - 5s - loss: 0.0447 - accuracy: 0.9861
Epoch 5/30
469/469 - 5s - loss: 0.0369 - accuracy: 0.9888
Epoch 6/30
469/469 - 6s - loss: 0.0307 - accuracy: 0.9904
Epoch 7/30
469/469 - 5s - loss: 0.0260 - accuracy: 0.9918
Epoch 8/30
469/469 - 5s - loss: 0.0225 - accuracy: 0.9932
Epoch 9/30
469/469 - 5s - loss: 0.0190 - accuracy: 0.9941
Epoch 10/30
469/469 - 5s - loss: 0.0163 - accuracy: 0.9948
Epoch 11/30
469/469 - 5s - loss: 0.0149 - accuracy: 0.9957
Epoch 12/30
469/469 - 6s - loss: 0.0132 - accuracy: 0.9962
Epoch 13/30
469/469 - 5s - loss: 0.0120 - accuracy: 0.9964
Epoch 14/30
469/469 - 5s - loss: 0.0110 - accuracy: 0.9966
Epoch 15/30
469/469 - 5s - loss: 0.0095 - accuracy: 0.9969
Epoch 16/30
469/469 - 5s - loss: 0.0088 - accuracy: 0.9973
Epoch 17/30
469/469 - 5s - loss: 0.0080 - accuracy: 0.9975
Epoch 18/30
469/469 - 5s - loss: 0.0070 - accuracy: 0.9978
Epoch 19/30
469/469 - 5s - loss: 0.0071 - accuracy: 0.9977
Epoch 20/30
469/469 - 5s - loss: 0.0063 - accuracy: 0.9982
Epoch 21/30
469/469 - 5s - loss: 0.0062 - accuracy: 0.9981
Epoch 22/30
469/469 - 6s - loss: 0.0049 - accuracy: 0.9983
Epoch 23/30
469/469 - 6s - loss: 0.0052 - accuracy: 0.9982
Epoch 24/30
469/469 - 7s - loss: 0.0050 - accuracy: 0.9984
Epoch 25/30
469/469 - 5s - loss: 0.0043 - accuracy: 0.9985
Epoch 26/30
469/469 - 5s - loss: 0.0041 - accuracy: 0.9986
Epoch 27/30
469/469 - 5s - loss: 0.0036 - accuracy: 0.9989
Epoch 28/30
469/469 - 5s - loss: 0.0038 - accuracy: 0.9988
Epoch 29/30
469/469 - 5s - loss: 0.0033 - accuracy: 0.9989
Epoch 30/30
469/469 - 5s - loss: 0.0023 - accuracy: 0.9992
313/313 [==============================] - 1s 2ms/step - loss: 0.0574 - accuracy: 0.9907
test_loss: 0.05741254240274429     test_accuracy: 0.9907000064849854

The test accuracy is noticeably better than the feedforward network above. There is still a little overfitting, though (the final training accuracy is 0.9992 while the test accuracy is 0.9907), so some parameter constraints, i.e. regularization, would help; see the sketch below.
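
One simple way to add that kind of constraint is to put an L2 penalty on the convolutional layers and a Dropout layer before the output, much like the feedforward model already does. A minimal sketch of a modified LeNet (the 0.0001 penalty and the 0.2 dropout rate are just starting points, not tuned values):

from keras import regularizers

def LeNetRegularized():
    network = models.Sequential()
    network.add(layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1),
                              kernel_regularizer=regularizers.l2(0.0001)))
    network.add(layers.AveragePooling2D((2, 2)))
    network.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu',
                              kernel_regularizer=regularizers.l2(0.0001)))
    network.add(layers.AveragePooling2D((2, 2)))
    network.add(layers.Conv2D(filters=120, kernel_size=(3, 3), activation='relu'))
    network.add(layers.Flatten())
    network.add(layers.Dense(84, activation='relu'))
    network.add(layers.Dropout(0.2))   # drop 20% of activations before the output layer
    network.add(layers.Dense(10, activation='softmax'))
    return network

You can also pass validation_data=(test_images, test_labels) to network.fit, so that history additionally records val_loss and val_accuracy and the gap between the training and test curves shows up directly in the plot.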

4. Closing Remarks

Alright, that wraps up this installment on handwritten digit recognition. If you still have questions, feel free to leave a comment below and I (不败顽童) will reply promptly. And if you would like to see more interesting programming projects, consider following me.