{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,7]],"date-time":"2024-08-07T07:36:19Z","timestamp":1723016179397},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2018,7]]},"abstract":"Recent years have witnessed the great success of convolutional neural networks (CNNs) in many related fields. However, its huge model size and computation complexity bring in difficulty when deploying CNNs in some scenarios, like embedded system with low computation power. To address this issue, many works have been proposed to prune filters in CNNs to reduce computation. However, they mainly focus on seeking which filters are unimportant in a layer and then prune filters layer by layer or globally. In this paper, we argue that the pruning order is also very significant for model pruning. We propose a novel approach to figure out which layers should be pruned in each step.\u00a0 First, we utilize a long short-term memory (LSTM) to learn the hierarchical characteristics of a network and generate a pruning decision for each layer, which is the main difference from previous works. Next, a channel-based method is adopted to evaluate the importance of filters in a to-be-pruned layer, followed by an accelerated recovery step. Experimental results demonstrate that our approach is capable of reducing 70.1% FLOPs for VGG and 47.5% for Resnet-56 with comparable accuracy. Also, the learning results seem to reveal the sensitivity of each network layer.<\/jats:p>","DOI":"10.24963\/ijcai.2018\/445","type":"proceedings-article","created":{"date-parts":[[2018,7,5]],"date-time":"2018-07-05T01:49:10Z","timestamp":1530755350000},"page":"3205-3211","source":"Crossref","is-referenced-by-count":17,"title":["Where to Prune: Using LSTM to Guide End-to-end Pruning"],"prefix":"10.24963","author":[{"given":"Jing","family":"Zhong","sequence":"first","affiliation":[{"name":"Beijing National Laboratory for Information Science and Technology (BNList), School of Software, Tsinghua University, Beijing 100084, China"}]},{"given":"Guiguang","family":"Ding","sequence":"additional","affiliation":[{"name":"Beijing National Laboratory for Information Science and Technology (BNList), School of Software, Tsinghua University, Beijing 100084, China"}]},{"given":"Yuchen","family":"Guo","sequence":"additional","affiliation":[{"name":"Beijing National Laboratory for Information Science and Technology (BNList), School of Software, Tsinghua University, Beijing 100084, China"}]},{"given":"Jungong","family":"Han","sequence":"additional","affiliation":[{"name":"School of Computing & Communications, Lancaster University, UK"}]},{"given":"Bin","family":"Wang","sequence":"additional","affiliation":[{"name":"Beijing National Laboratory for Information Science and Technology (BNList), School of Software, Tsinghua University, Beijing 100084, China"}]}],"member":"10584","event":{"number":"27","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2018","name":"Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}","start":{"date-parts":[[2018,7,13]]},"theme":"Artificial Intelligence","location":"Stockholm, Sweden","end":{"date-parts":[[2018,7,19]]}},"container-title":["Proceedings of the Twenty-Seventh 
International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2018,7,5]],"date-time":"2018-07-05T01:52:52Z","timestamp":1530755572000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2018\/445"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2018,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2018\/445","relation":{},"subject":[],"published":{"date-parts":[[2018,7]]}}}
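The abstract describes a two-stage pipeline: an LSTM reads per-layer characteristics and decides which layer to prune at each step, then a channel-based score ranks filters inside the chosen layer before a recovery (fine-tuning) pass. The sketch below is a minimal illustration of that idea, not the authors' implementation: the layer features, network sizes, and the L1-norm importance score are all assumptions chosen as common proxies.

```python
# Hypothetical sketch of LSTM-guided layer selection for filter pruning.
# All names (LayerPruningLSTM, channel_importance) and feature choices are
# illustrative; the paper's actual features, training signal, and recovery
# procedure are not reproduced here.
import torch
import torch.nn as nn

class LayerPruningLSTM(nn.Module):
    """Emits a pruning probability for each conv layer of a network."""
    def __init__(self, feat_dim=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, layer_feats):            # (1, num_layers, feat_dim)
        out, _ = self.lstm(layer_feats)
        return torch.sigmoid(self.head(out))   # (1, num_layers, 1)

def channel_importance(conv: nn.Conv2d) -> torch.Tensor:
    # L1 norm of each output filter: a small norm marks a pruning candidate.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

# Toy network: describe each conv layer with simple normalized features
# (depth index, in/out channels, kernel size) -- an assumed encoding.
convs = [nn.Conv2d(3, 16, 3), nn.Conv2d(16, 32, 3), nn.Conv2d(32, 64, 3)]
feats = torch.tensor([[[i / len(convs), c.in_channels / 64,
                        c.out_channels / 64, c.kernel_size[0] / 7]
                       for i, c in enumerate(convs)]])

agent = LayerPruningLSTM()
probs = agent(feats).squeeze()              # one decision per layer
layer = int(probs.argmax())                 # pick the layer to prune next
scores = channel_importance(convs[layer])
drop = scores.argsort()[: len(scores) // 4] # prune the lowest-scoring 25%
print(f"prune layer {layer}, filters {drop.tolist()}")

# A full pipeline would rebuild the chosen conv without the dropped
# filters, fine-tune briefly (the paper's "accelerated recovery" step),
# and feed the FLOPs saved vs. accuracy retained back to the LSTM as a
# learning signal, repeating until the target compression is reached.
```

The key design point the paper argues for is visible even in this toy version: the prune-or-keep decision is made per layer by a sequence model over the whole network, rather than applying a fixed layer-by-layer or global threshold.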