{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,10,30]],"date-time":"2024-10-30T20:45:46Z","timestamp":1730321146040,"version":"3.28.0"},"publisher-location":"New York, NY, USA","reference-count":27,"publisher":"ACM","license":[{"start":{"date-parts":[[2019,6,2]],"date-time":"2019-06-02T00:00:00Z","timestamp":1559433600000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2019,6,2]]},"DOI":"10.1145\/3316781.3324696","type":"proceedings-article","created":{"date-parts":[[2019,5,23]],"date-time":"2019-05-23T18:07:13Z","timestamp":1558634833000},"page":"1-6","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":18,"title":["ReForm"],"prefix":"10.1145","author":[{"given":"Zirui","family":"Xu","sequence":"first","affiliation":[{"name":"George Mason University, Fairfax, Virginia"}]},{"given":"Fuxun","family":"Yu","sequence":"additional","affiliation":[{"name":"George Mason University, Fairfax, Virginia"}]},{"given":"Chenchen","family":"Liu","sequence":"additional","affiliation":[{"name":"Clarkson University, Potsdam, New York"}]},{"given":"Xiang","family":"Chen","sequence":"additional","affiliation":[{"name":"George Mason University, Fairfax, Virginia"}]}],"member":"320","published-online":{"date-parts":[[2019,6,2]]},"reference":[{"key":"e_1_3_2_1_1_1","first-page":"265","article-title":"Tensorflow: A System for Large-scale Machine Learning","volume":"16","author":"Abadi Mart\u00edn","year":"2016","unstructured":"Mart\u00edn Abadi and 2016 . Tensorflow: A System for Large-scale Machine Learning . In OSDI , Vol. 16. 265 -- 283 . Mart\u00edn Abadi and et al. 2016. Tensorflow: A System for Large-scale Machine Learning. In OSDI, Vol. 16. 265--283.","journal-title":"OSDI"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"crossref","unstructured":"Stephen Boyd. 2011. Alternating direction method of multipliers. In Talk at NIPS workshop on optimization and machine learning. Stephen Boyd. 2011. Alternating direction method of multipliers. In Talk at NIPS workshop on optimization and machine learning.","DOI":"10.1561\/9781601984616"},{"key":"e_1_3_2_1_3_1","volume-title":"Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint","author":"Chollet Fran\u00e7ois","year":"2017","unstructured":"Fran\u00e7ois Chollet . 2017 . Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint (2017), 1610--02357. Fran\u00e7ois Chollet. 2017. Xception: Deep Learning with Depthwise Separable Convolutions. arXiv preprint (2017), 1610--02357."},{"key":"e_1_3_2_1_4_1","unstructured":"Matthieu Courbariaux and etal 2016. Binarized neural networks: Training Deep Neural Networks with Weights and Activations Constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830 (2016). Matthieu Courbariaux and et al. 2016. Binarized neural networks: Training Deep Neural Networks with Weights and Activations Constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830 (2016)."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3241539.3241559"},{"key":"e_1_3_2_1_6_1","unstructured":"Ariel Gordon and etal 2017. MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks. arXiv preprint arXiv:1711.06798 (2017). Ariel Gordon and et al. 2017. MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks. arXiv preprint arXiv:1711.06798 (2017)."},{"key":"e_1_3_2_1_7_1","unstructured":"Song Han and etal 2015. Deep compression: Compressing Deep Neural Networks with Pruning Trained Quantization and Huffman Coding. arXiv preprint arXiv:1510.00149 (2015). Song Han and et al. 2015. Deep compression: Compressing Deep Neural Networks with Pruning Trained Quantization and Huffman Coding. arXiv preprint arXiv:1510.00149 (2015)."},{"volume-title":"Han and et al","year":"2015","key":"e_1_3_2_1_8_1","unstructured":"Song. Han and et al . 2015 . Learning Both Weights and Connections for Efficient Neural network. In Advances in neural information processing systems. 1135--1143. Song. Han and et al. 2015. Learning Both Weights and Connections for Efficient Neural network. In Advances in neural information processing systems. 1135--1143."},{"volume-title":"Proc. of ECCV.","author":"Yihui","key":"e_1_3_2_1_9_1","unstructured":"Yihui He and et al. 2018. Amc: Automl for Model Compression and Acceleration on Mobile Devices . In Proc. of ECCV. Yihui He and et al. 2018. Amc: Automl for Model Compression and Acceleration on Mobile Devices. In Proc. of ECCV."},{"key":"e_1_3_2_1_10_1","volume-title":"Mobilenets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861","author":"Howard Andrew G","year":"2017","unstructured":"Andrew G Howard and 2017 . Mobilenets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861 (2017). Andrew G Howard and et al. 2017. Mobilenets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861 (2017)."},{"key":"e_1_3_2_1_11_1","unstructured":"Hengyuan Hu and etal 2016. Network trimming: A data-driven Neuron Pruning Approach Towards Efficient Deep Architectures. arXiv preprint arXiv:1607.03250. Hengyuan Hu and et al. 2016. Network trimming: A data-driven Neuron Pruning Approach Towards Efficient Deep Architectures. arXiv preprint arXiv:1607.03250."},{"key":"e_1_3_2_1_12_1","volume-title":"Squeezenet: Alexnet-level Accuracy with 50x Fewer Parameters and Less 0.5mb Model Size. arXiv preprint arXiv:1602.07360.","author":"Iandola Forrest N","year":"2016","unstructured":"Forrest N Iandola and 2016 . Squeezenet: Alexnet-level Accuracy with 50x Fewer Parameters and Less 0.5mb Model Size. arXiv preprint arXiv:1602.07360. Forrest N Iandola and et al. 2016. Squeezenet: Alexnet-level Accuracy with 50x Fewer Parameters and Less 0.5mb Model Size. arXiv preprint arXiv:1602.07360."},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Max Jaderberg and etal 2014. Speeding up Convolutional Neural Networks with Low Rank Expansions. arXiv preprint arXiv:1405.3866 (2014). Max Jaderberg and et al. 2014. Speeding up Convolutional Neural Networks with Low Rank Expansions. arXiv preprint arXiv:1405.3866 (2014).","DOI":"10.5244\/C.28.88"},{"volume-title":"Proc. of NIPS.","author":"Alex","key":"e_1_3_2_1_14_1","unstructured":"Alex Krizhevsky and et al. 2012. Imagenet Classification with Deep Convolutional Neural Networks . In Proc. of NIPS. Alex Krizhevsky and et al. 2012. Imagenet Classification with Deep Convolutional Neural Networks. In Proc. of NIPS."},{"key":"e_1_3_2_1_15_1","unstructured":"Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images. (2009). Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images. (2009)."},{"key":"e_1_3_2_1_16_1","unstructured":"Yann LeCun Corinna Cortes and CJ Burges. 2010. MNIST Handwritten Digit Database. AT&T Labs {Online}. Available: http:\/\/yann. lecun. com\/exdb\/mnist. Yann LeCun Corinna Cortes and CJ Burges. 2010. MNIST Handwritten Digit Database. AT&T Labs {Online}. Available: http:\/\/yann. lecun. com\/exdb\/mnist."},{"key":"e_1_3_2_1_17_1","unstructured":"Yann LeCun and etal 2015. LeNet-5 convolutional neural networks. URL: http:\/\/yann. lecun. com\/exdb\/lenet (2015) 20. Yann LeCun and et al. 2015. LeNet-5 convolutional neural networks. URL: http:\/\/yann. lecun. com\/exdb\/lenet (2015) 20."},{"key":"e_1_3_2_1_18_1","unstructured":"Hao Li and etal 2016. Pruning Filters for Efficient Convnets. arXiv preprint arXiv:1608.08710 (2016). Hao Li and et al. 2016. Pruning Filters for Efficient Convnets. arXiv preprint arXiv:1608.08710 (2016)."},{"key":"e_1_3_2_1_19_1","unstructured":"Sicong Liu and etal 2018. On-Demand Deep Model Compression for Mobile Devices: A Usage-Driven Model Selection Framework. (2018). Sicong Liu and et al. 2018. On-Demand Deep Model Compression for Mobile Devices: A Usage-Driven Model Selection Framework. (2018)."},{"key":"e_1_3_2_1_20_1","unstructured":"Ningning Ma and etal 2018. Shufflenet v2: Practical guidelines for efficient cnn architecture design. arXiv preprint arXiv:1807.11164 (2018). Ningning Ma and et al. 2018. Shufflenet v2: Practical guidelines for efficient cnn architecture design. arXiv preprint arXiv:1807.11164 (2018)."},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2015.2494536"},{"volume-title":"Proc. of ICASSP.","author":"Changhao","key":"e_1_3_2_1_22_1","unstructured":"Changhao Shan and et al. 2018. Attention-based End-to-end Speech Recognition on Voice Search . In Proc. of ICASSP. Changhao Shan and et al. 2018. Attention-based End-to-end Speech Recognition on Voice Search. In Proc. of ICASSP."},{"key":"e_1_3_2_1_23_1","volume-title":"Very Deep Convolutional Networks for Large-scale Image Recognition. arXiv preprint arXiv:1409.1556","author":"Simonyan Karen","year":"2014","unstructured":"Karen Simonyan and Andrew Zisserman . 2014. Very Deep Convolutional Networks for Large-scale Image Recognition. arXiv preprint arXiv:1409.1556 ( 2014 ). Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large-scale Image Recognition. arXiv preprint arXiv:1409.1556 (2014)."},{"key":"e_1_3_2_1_24_1","unstructured":"Yue Wang and etal 2018. EnergyNet: Energy-Efficient Dynamic Inference. Yue Wang and et al. 2018. EnergyNet: Energy-Efficient Dynamic Inference."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/3218603.3218652"},{"key":"e_1_3_2_1_26_1","unstructured":"Tien-Ju Yang and etal 2016. Designing Energy-efficient Convolutional Neural Networks using Energy-aware Pruning. arXiv preprint arXiv:1611.05128 (2016). Tien-Ju Yang and et al. 2016. Designing Energy-efficient Convolutional Neural Networks using Energy-aware Pruning. arXiv preprint arXiv:1611.05128 (2016)."},{"key":"e_1_3_2_1_27_1","first-page":"46","article-title":"Netadapt","volume":"41","author":"Yang Tien-Ju","year":"2018","unstructured":"Tien-Ju Yang and 2018 . Netadapt : Platform-aware Neural Network Adaptation for Mobile Applications. Energy 41 (2018), 46 . Tien-Ju Yang and et al. 2018. Netadapt: Platform-aware Neural Network Adaptation for Mobile Applications. Energy 41 (2018), 46.","journal-title":"Platform-aware Neural Network Adaptation for Mobile Applications. Energy"}],"event":{"name":"DAC '19: The 56th Annual Design Automation Conference 2019","sponsor":["SIGDA ACM Special Interest Group on Design Automation","IEEE-CEDA","SIGBED ACM Special Interest Group on Embedded Systems"],"location":"Las Vegas NV USA","acronym":"DAC '19"},"container-title":["Proceedings of the 56th Annual Design Automation Conference 2019"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3316781.3324696","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,6]],"date-time":"2023-01-06T03:49:52Z","timestamp":1672976992000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3316781.3324696"}},"subtitle":["Static and Dynamic Resource-Aware DNN Reconfiguration Framework for Mobile Device"],"short-title":[],"issued":{"date-parts":[[2019,6,2]]},"references-count":27,"alternative-id":["10.1145\/3316781.3324696","10.1145\/3316781"],"URL":"https:\/\/doi.org\/10.1145\/3316781.3324696","relation":{},"subject":[],"published":{"date-parts":[[2019,6,2]]},"assertion":[{"value":"2019-06-02","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}