1. Video tutorials:
Bilibili, NetEase Cloud Classroom, Tencent Classroom
2. Code:
Gitee
Github
3. Storage:
Google Cloud
Baidu Cloud:
Extraction code:

"Learning both Weights and Connections for Efficient Neural Networks"
Authors: Song Han, Jeff Pool, et al.
Affiliations: Stanford University, NVIDIA
Venue: NIPS 2015

Submission history
From: Song Han
[v1] Mon, 8 Jun 2015 19:28:43 UTC (2,968 KB)
[v2] Wed, 29 Jul 2015 22:27:31 UTC (1,484 KB)
[v3] Fri, 30 Oct 2015 23:29:27 UTC (1,075 KB)


Paper: https://arxiv.org/abs/1506.02626


  • Abstract
    Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.

By changing the model's structure during training, we can reduce the number of parameters and the amount of computation, and thereby speed up inference.


The weights a model learns during training are not only used to produce its predictions; they can also serve as a measure of how important each connection is.
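As a concrete sketch of magnitude-as-importance (PyTorch assumed; the small model and the 90% sparsity target are placeholders), every weight is scored by its absolute value and a global pruning threshold is read off as a quantile:

```python
import torch
import torch.nn as nn

# Hypothetical small model, purely for illustration.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# |w| is the importance score: collect the magnitudes of all weights.
all_mags = torch.cat([p.detach().abs().flatten()
                      for name, p in model.named_parameters()
                      if name.endswith("weight")])

# Pick the threshold so that, e.g., 90% of connections fall below it.
sparsity = 0.90
threshold = torch.quantile(all_mags, sparsity)
print(f"prune weights with |w| < {threshold:.4f} for {sparsity:.0%} sparsity")
```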


Learning the weights jointly with the decision of which neurons to prune, rather than pruning in one shot after training, reduces the loss incurred by pruning.

  • 1. Train the uncompressed model on the dataset until it converges.
  • 2. Starting from the fully trained model, prune the connections whose weight magnitudes fall below a threshold.
  • 3. Fine-tune the weights of the pruned model on the dataset to recover its accuracy (a minimal sketch of this loop follows below).
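A minimal sketch of the three steps above, assuming PyTorch; `loader`, the optimizer, epoch counts, and the sparsity level are placeholders, and the paper applies this per layer and iterates the prune/retrain cycle:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Step 2: build 0/1 masks that zero out the smallest-magnitude weights."""
    masks = {}
    for name, p in model.named_parameters():
        if name.endswith("weight"):
            thresh = torch.quantile(p.detach().abs().flatten(), sparsity)
            masks[name] = (p.detach().abs() > thresh).float()
            p.data *= masks[name]          # zero the pruned connections
    return masks

def train(model, loader, optimizer, epochs, masks=None):
    """Steps 1 and 3: normal training; if masks are given, keep pruned
    weights at zero after every update (fine-tuning the survivors)."""
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
            if masks is not None:
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p.data *= masks[name]

# Usage (loader/epochs are placeholders):
# train(model, loader, opt, epochs=30)               # 1. train to convergence
# masks = magnitude_prune(model, sparsity=0.9)       # 2. prune small weights
# train(model, loader, opt, epochs=10, masks=masks)  # 3. fine-tune to recover
```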

I. Paper Overview

 

FLOPs: the number of floating-point operations a model performs, a measure of its computational cost; the larger it is, the slower inference runs. (Note the distinction from FLOPS, floating-point operations per second, which measures hardware throughput.)
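As a worked example of counting this cost (a rough back-of-the-envelope sketch, not the paper's own numbers; the shapes below are AlexNet's first conv layer), a convolution's operation count is the per-output kernel work times the number of output positions:

```python
def conv2d_flops(c_in, c_out, k, h_out, w_out):
    """FLOPs of one Conv2d, counting a multiply-add as 2 ops."""
    macs = c_in * k * k * c_out * h_out * w_out
    return 2 * macs

def linear_flops(n_in, n_out):
    """FLOPs of one fully connected layer."""
    return 2 * n_in * n_out

# Example: AlexNet's first conv (96 filters of 11x11x3, 55x55 output map):
print(conv2d_flops(c_in=3, c_out=96, k=11, h_out=55, w_out=55))  # ~211M FLOPs
```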



Energy consumption: the paper's energy figures show that a DRAM access costs orders of magnitude more energy than an arithmetic operation, so pruning the model until its weights fit in on-chip SRAM saves power.



Results

On the ImageNet dataset, the method compresses AlexNet's parameter count by 9x (61M to 6.7M) with essentially no loss of accuracy; VGG-16 is compressed by 13x (138M to 10.3M), likewise without accuracy loss.


Significance

  • Brings pruning into the training process and trains iteratively, reducing the accuracy loss caused by pruning
  • Uses a weight's magnitude as the measure of a connection's importance, keeping the importance computation simple
  • Discusses how different regularizers affect the pruning process (a small L1-vs-L2 sketch follows below)
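On that last point, here is a minimal sketch of the two penalties being compared (PyTorch assumed; the coefficients are placeholders). The paper's finding is that L1 gives better accuracy immediately after pruning, while L2 wins once the network is retrained:

```python
import torch

def l1_penalty(model, lam=1e-5):
    # L1 pushes many weights toward exactly zero: better accuracy
    # right after pruning, before any retraining.
    return lam * sum(p.abs().sum() for n, p in model.named_parameters()
                     if n.endswith("weight"))

def l2_penalty(model, lam=1e-4):
    # L2 (weight decay) shrinks weights smoothly; the paper reports it
    # gives the best accuracy after pruning plus retraining.
    return lam * sum((p ** 2).sum() for n, p in model.named_parameters()
                     if n.endswith("weight"))

# Usage: loss = F.cross_entropy(model(x), y) + l2_penalty(model)
```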

II. Close Reading

 

III. Code Implementation

 

IV. Questions and Reflections

 

V. Experiment Parameter Settings

 

VI. Additional Notes

 

Methods for accelerating model inference:

1. SVD and quantization (see the SVD sketch below)

2. Replacing fully connected layers with pooling layers

3. Other pruning methods, e.g., Hessian-based approaches (Optimal Brain Damage / Optimal Brain Surgeon)
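To illustrate item 1, a minimal sketch (PyTorch assumed; the rank of 256 is a placeholder) that replaces one fully connected layer with two thinner ones via truncated SVD; in practice the factorized model is fine-tuned afterwards to recover accuracy:

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate W (out x in) as U_r @ diag(S_r) @ Vh_r, i.e. two smaller
    Linear layers with rank*(in+out) weights instead of in*out."""
    W = layer.weight.data                          # shape: (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features,
                       bias=layer.bias is not None)
    first.weight.data = Vh[:rank] * S[:rank].unsqueeze(1)  # (rank, in)
    second.weight.data = U[:, :rank]                       # (out, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

# Example: a 4096x4096 FC layer (~16.8M weights) at rank 256
# becomes roughly 2 * 256 * 4096 ~= 2.1M weights.
fc = nn.Linear(4096, 4096)
print(sum(p.numel() for p in factorize_linear(fc, 256).parameters()))
```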