{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,16]],"date-time":"2024-08-16T06:32:02Z","timestamp":1723789922697},"reference-count":20,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2019,1,28]],"date-time":"2019-01-28T00:00:00Z","timestamp":1548633600000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["SIGCOMM Comput. Commun. Rev."],"published-print":{"date-parts":[[2019,1,28]]},"abstract":"\n In this paper, we present Deepcache a novel Framework for content caching, which can significantly boost cache performance. Our Framework is based on powerful deep recurrent neural network models. It comprises of two main components: i)\n Object Characteristics Predictor,<\/jats:italic>\n which builds upon deep LSTM Encoder-Decoder model to predict the future characteristics of an object (such as object popularity) - to the best of our knowledge, we are the first to propose LSTM Encoder-Decoder model for content caching; ii)\n a caching policy component,<\/jats:italic>\n which accounts for predicted information of objects to make smart caching decisions. 
In our thorough experiments, we show that applying the DeepCache framework to existing cache policies, such as LRU and k-LRU, significantly boosts the number of cache hits.<\/jats:p>","DOI":"10.1145\/3310165.3310174","type":"journal-article","created":{"date-parts":[[2019,1,29]],"date-time":"2019-01-29T13:16:22Z","timestamp":1548767782000},"page":"64-69","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":18,"title":["Making content caching policies 'smart' using the deepcache framework"],"prefix":"10.1145","volume":"48",
"author":[{"given":"Arvind","family":"Narayanan","sequence":"first","affiliation":[{"name":"University of Minnesota"}]},{"given":"Saurabh","family":"Verma","sequence":"additional","affiliation":[{"name":"University of Minnesota"}]},{"given":"Eman","family":"Ramadan","sequence":"additional","affiliation":[{"name":"University of Minnesota"}]},{"given":"Pariya","family":"Babaie","sequence":"additional","affiliation":[{"name":"University of Minnesota"}]},{"given":"Zhi-Li","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Minnesota"}]}],"member":"320","published-online":{"date-parts":[[2019,1,28]]},
"reference":[{"key":"e_1_2_1_1_1","volume-title":"Forecast and methodology","author":"Cisco","year":"2016","unstructured":"Cisco visual networking index: Forecast and methodology, 2016--2021, 2017."},
{"key":"e_1_2_1_2_1","volume-title":"Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473","author":"Bahdanau D.","year":"2014","unstructured":"Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)."},
{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3078505.3078560"},
{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFCOM.2001.916637"},
{"key":"e_1_2_1_5_1","volume-title":"Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555","author":"Chung J.","year":"2014","unstructured":"Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)."},
{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2896377.2901459"},
{"key":"e_1_2_1_7_1","volume-title":"Proceedings of the 31st International Conference on International Conference on Machine Learning -","volume":"32","author":"Graves A.","year":"2014","unstructured":"Graves, A., and Jaitly, N. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32 (2014), ICML'14, JMLR.org, pp. II-1764--II-1772."},
{"key":"e_1_2_1_8_1","volume-title":"Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623","author":"Gregor K.","year":"2015","unstructured":"Gregor, K., Danihelka, I., Graves, A., Rezende, D. J., and Wierstra, D. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623 (2015)."},
{"key":"e_1_2_1_9_1","volume-title":"Learning memory access patterns. arXiv preprint arXiv:1803.02329","author":"Hashemi M.","year":"2018","unstructured":"Hashemi, M., et al. Learning memory access patterns. arXiv preprint arXiv:1803.02329 (2018)."},
{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1997.9.8.1735"},
{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1658939.1658941"},
{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/1282380.1282402"},
{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3098822.3098843"},
{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2014.6848145"},
{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.21437\/Interspeech.2010-343"},
{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICMEW.2015.7169811"},
{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2017.2787979"},
{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/2637364.2592021"},
{"key":"e_1_2_1_19_1","volume-title":"Sequence to sequence learning with neural networks. NIPS'14","author":"Sutskever I.","unstructured":"Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. NIPS'14, MIT Press."},
{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/776322.776327"}],
"container-title":["ACM SIGCOMM Computer Communication Review"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3310165.3310174","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T10:18:39Z","timestamp":1672568319000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3310165.3310174"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,1,28]]},"references-count":20,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2019,1,28]]}},"alternative-id":["10.1145\/3310165.3310174"],"URL":"https:\/\/doi.org\/10.1145\/3310165.3310174","relation":{},"ISSN":["0146-4833"],"issn-type":[{"value":"0146-4833","type":"print"}],"subject":[],"published":{"date-parts":[[2019,1,28]]},"assertion":[{"value":"2019-01-28","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}