{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T17:23:38Z","timestamp":1732037018025},"publisher-location":"New York, NY, USA","reference-count":45,"publisher":"ACM","license":[{"start":{"date-parts":[[2018,11,3]],"date-time":"2018-11-03T00:00:00Z","timestamp":1541203200000},"content-version":"vor","delay-in-days":365,"URL":"http:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000001","name":"NSF","doi-asserted-by":"publisher","award":["IIS-1719097"],"id":[{"id":"10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2017,11,3]]},"DOI":"10.1145\/3128572.3140448","type":"proceedings-article","created":{"date-parts":[[2017,11,3]],"date-time":"2017-11-03T12:36:10Z","timestamp":1509712570000},"update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":912,"title":["ZOO"],"prefix":"10.1145","author":[{"given":"Pin-Yu","family":"Chen","sequence":"first","affiliation":[{"name":"IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA"}]},{"given":"Huan","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of California, Davis, Davis, CA & IBM T. J. Watson Research Center, Yorktown Heights, NY, USA"}]},{"given":"Yash","family":"Sharma","sequence":"additional","affiliation":[{"name":"IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA"}]},{"given":"Jinfeng","family":"Yi","sequence":"additional","affiliation":[{"name":"IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA"}]},{"given":"Cho-Jui","family":"Hsieh","sequence":"additional","affiliation":[{"name":"University of California, Davis, Davis, CA, USA"}]}],"member":"320","published-online":{"date-parts":[[2017,11,3]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-010-5188-5"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/1128817.1128824"},{"key":"e_1_3_2_1_3_1","unstructured":"Dimitri P Bertsekas. Nonlinear programming. Dimitri P Bertsekas. Nonlinear programming."},{"key":"e_1_3_2_1_4_1","volume-title":"Evasion attacks against machine learning at test time Joint European Conference on Machine Learning and Knowledge Discovery in Databases","author":"Biggio Battista","unstructured":"Battista Biggio , Igino Corona , Davide Maiorca , Blaine Nelson , Nedim \u0160rndi\u0107 , Pavel Laskov , Giorgio Giacinto , and Fabio Roli . 2013. Evasion attacks against machine learning at test time Joint European Conference on Machine Learning and Knowledge Discovery in Databases . Springer , 387--402. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim \u0160rndi\u0107, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. 2013. Evasion attacks against machine learning at test time Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 387--402."},{"key":"e_1_3_2_1_5_1","volume-title":"Proceedings of the International Coference on International Conference on Machine Learning. 1467--1474","author":"Biggio Battista","year":"2012","unstructured":"Battista Biggio , Blaine Nelson , and Pavel Laskov . 2012 . Poisoning Attacks Against Support Vector Machines . Proceedings of the International Coference on International Conference on Machine Learning. 1467--1474 . 
Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning Attacks Against Support Vector Machines. Proceedings of the International Coference on International Conference on Machine Learning. 1467--1474."},{"key":"e_1_3_2_1_6_1","volume-title":"Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks. arXiv preprint arXiv:1707.02476","author":"Bradshaw John","year":"2017","unstructured":"John Bradshaw , Alexander G. de G. Matthews , and Zoubin Ghahramani . 2017. Adversarial Examples , Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks. arXiv preprint arXiv:1707.02476 ( 2017 ). John Bradshaw, Alexander G. de G. Matthews, and Zoubin Ghahramani. 2017. Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks. arXiv preprint arXiv:1707.02476 (2017)."},{"key":"e_1_3_2_1_7_1","volume-title":"Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. arXiv preprint arXiv:1705.07263","author":"Carlini Nicholas","year":"2017","unstructured":"Nicholas Carlini and David Wagner . 2017. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. arXiv preprint arXiv:1705.07263 ( 2017 ). Nicholas Carlini and David Wagner. 2017. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. arXiv preprint arXiv:1705.07263 (2017)."},{"key":"e_1_3_2_1_8_1","volume-title":"Towards evaluating the robustness of neural networks IEEE Symposium on Security and Privacy (SP)","author":"Carlini Nicholas","unstructured":"Nicholas Carlini and David Wagner . 2017. Towards evaluating the robustness of neural networks IEEE Symposium on Security and Privacy (SP) . IEEE , 39--57. Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks IEEE Symposium on Security and Privacy (SP). IEEE, 39--57."},{"key":"e_1_3_2_1_9_1","volume-title":"Robust Physical-World Attacks on Machine Learning Models. arXiv preprint arXiv:1707.08945","author":"Evtimov Ivan","year":"2017","unstructured":"Ivan Evtimov , Kevin Eykholt , Earlence Fernandes , Tadayoshi Kohno , Bo Li , Atul Prakash , Amir Rahmati , and Dawn Song . 2017. Robust Physical-World Attacks on Machine Learning Models. arXiv preprint arXiv:1707.08945 ( 2017 ). Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. 2017. Robust Physical-World Attacks on Machine Learning Models. arXiv preprint arXiv:1707.08945 (2017)."},{"key":"e_1_3_2_1_10_1","volume-title":"Gardner","author":"Feinman Reuben","year":"2017","unstructured":"Reuben Feinman , Ryan R. Curtin , Saurabh Shintre , and Andrew B . Gardner . 2017 . Detecting Adversarial Samples from Artifacts . arXiv preprint arXiv:1703.00410 (2017). Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, and Andrew B. Gardner. 2017. Detecting Adversarial Samples from Artifacts. arXiv preprint arXiv:1703.00410 (2017)."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1137\/120880811"},{"key":"e_1_3_2_1_12_1","volume-title":"Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572","author":"Goodfellow Ian J.","year":"2014","unstructured":"Ian J. Goodfellow , Jonathon Shlens , and Christian Szegedy . 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 ( 2014 ). Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. 
arXiv preprint arXiv:1412.6572 (2014)."},{"key":"e_1_3_2_1_13_1","volume-title":"On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280","author":"Grosse Kathrin","year":"2017","unstructured":"Kathrin Grosse , Praveen Manoharan , Nicolas Papernot , Michael Backes , and Patrick McDaniel . 2017. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280 ( 2017 ). Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, and Patrick McDaniel. 2017. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280 (2017)."},{"key":"e_1_3_2_1_14_1","volume-title":"Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435","author":"Grosse Kathrin","year":"2016","unstructured":"Kathrin Grosse , Nicolas Papernot , Praveen Manoharan , Michael Backes , and Patrick McDaniel . 2016. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435 ( 2016 ). Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. 2016. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435 (2016)."},{"key":"e_1_3_2_1_15_1","volume-title":"Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531","author":"Hinton Geoffrey","year":"2015","unstructured":"Geoffrey Hinton , Oriol Vinyals , and Jeff Dean . 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 ( 2015 ). Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)."},{"key":"e_1_3_2_1_16_1","volume-title":"Black-Box Attacks against RNN based Malware Detection Algorithms. arXiv preprint arXiv:1705.08131","author":"Hu Weiwei","year":"2017","unstructured":"Weiwei Hu and Ying Tan . 2017. Black-Box Attacks against RNN based Malware Detection Algorithms. arXiv preprint arXiv:1705.08131 ( 2017 ). Weiwei Hu and Ying Tan. 2017. Black-Box Attacks against RNN based Malware Detection Algorithms. arXiv preprint arXiv:1705.08131 (2017)."},{"key":"e_1_3_2_1_17_1","volume-title":"Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN. arXiv preprint arXiv:1702.05983","author":"Hu Weiwei","year":"2017","unstructured":"Weiwei Hu and Ying Tan . 2017. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN. arXiv preprint arXiv:1702.05983 ( 2017 ). Weiwei Hu and Ying Tan. 2017. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN. arXiv preprint arXiv:1702.05983 (2017)."},{"key":"e_1_3_2_1_18_1","volume-title":"Safety verification of deep neural networks. arXiv preprint arXiv:1610.06940","author":"Huang Xiaowei","year":"2016","unstructured":"Xiaowei Huang , Marta Kwiatkowska , Sen Wang , and Min Wu. 2016. Safety verification of deep neural networks. arXiv preprint arXiv:1610.06940 ( 2016 ). Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. 2016. Safety verification of deep neural networks. arXiv preprint arXiv:1610.06940 (2016)."},{"key":"e_1_3_2_1_19_1","volume-title":"Robust convolutional neural networks under adversarial noise. arXiv preprint arXiv:1511.06306","author":"Jin Jonghoon","year":"2015","unstructured":"Jonghoon Jin , Aysegul Dundar , and Eugenio Culurciello . 2015. Robust convolutional neural networks under adversarial noise. arXiv preprint arXiv:1511.06306 ( 2015 ). 
Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello. 2015. Robust convolutional neural networks under adversarial noise. arXiv preprint arXiv:1511.06306 (2015)."},{"key":"e_1_3_2_1_20_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik","year":"2014","unstructured":"Diederik Kingma and Jimmy Ba . 2014 . Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},{"key":"e_1_3_2_1_21_1","volume-title":"Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin , Ian Goodfellow , and Samy Bengio . 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 ( 2016 ). Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016)."},{"key":"e_1_3_2_1_22_1","volume-title":"Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236","author":"Kurakin Alexey","year":"2016","unstructured":"Alexey Kurakin , Ian Goodfellow , and Samy Bengio . 2016. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 ( 2016 ). Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236 (2016)."},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4614-7946-8"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1038\/nature14539"},{"key":"e_1_3_2_1_25_1","unstructured":"Xiangru Lian Huan Zhang Cho-Jui Hsieh Yijun Huang and Ji Liu. 2016. A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order Advances in Neural Information Processing Systems. 3054--3062. Xiangru Lian Huan Zhang Cho-Jui Hsieh Yijun Huang and Ji Liu. 2016. A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order Advances in Neural Information Processing Systems. 3054--3062."},{"key":"e_1_3_2_1_26_1","volume-title":"Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770","author":"Liu Yanpei","year":"2016","unstructured":"Yanpei Liu , Xinyun Chen , Chang Liu , and Dawn Song . 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 ( 2016 ). Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770 (2016)."},{"key":"e_1_3_2_1_27_1","volume-title":"Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv preprint arXiv:1706.06083","author":"Madry Aleksander","year":"2017","unstructured":"Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv preprint arXiv:1706.06083 ( 2017 ). Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv preprint arXiv:1706.06083 (2017)."},{"key":"e_1_3_2_1_28_1","volume-title":"On detecting adversarial perturbations. 
arXiv preprint arXiv:1702.04267","author":"Metzen Jan Hendrik","year":"2017","unstructured":"Jan Hendrik Metzen , Tim Genewein , Volker Fischer , and Bastian Bischoff . 2017. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267 ( 2017 ). Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. 2017. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267 (2017)."},{"key":"e_1_3_2_1_29_1","volume-title":"Universal adversarial perturbations. arXiv preprint arXiv:1610.08401","author":"Fawzi Alhussein","year":"2016","unstructured":"Seyed-Mohsen, Moosavi-Dezfooli, Alhussein Fawzi , Omar Fawzi , and Pascal Frossard . 2016. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401 ( 2016 ). Seyed-Mohsen, Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 2016. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401 (2016)."},{"key":"e_1_3_2_1_30_1","volume-title":"Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554","author":"Fawzi Alhussein","year":"2017","unstructured":"Seyed-Mohsen, Moosavi-Dezfooli, Alhussein Fawzi , Omar Fawzi , Pascal Frossard , and Stefano Soatto . 2017. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554 ( 2017 ). Seyed-Mohsen, Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal Frossard, and Stefano Soatto. 2017. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554 (2017)."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Seyed-Mohsen Moosavi-Dezfooli Alhussein Fawzi and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2574--2582. Seyed-Mohsen Moosavi-Dezfooli Alhussein Fawzi and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2574--2582.","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_2_1_32_1","unstructured":"Yurii Nesterov etal 2011. Random gradient-free minimization of convex functions. Technical Report. Universit\u00e9 catholique de Louvain Center for Operations Research and Econometrics (CORE). Yurii Nesterov et al. 2011. Random gradient-free minimization of convex functions. Technical Report. Universit\u00e9 catholique de Louvain Center for Operations Research and Econometrics (CORE)."},{"key":"e_1_3_2_1_33_1","volume-title":"Extending Defensive Distillation. arXiv preprint arXiv:1705.05264","author":"Papernot Nicolas","year":"2017","unstructured":"Nicolas Papernot and Patrick McDaniel , 2017. Extending Defensive Distillation. arXiv preprint arXiv:1705.05264 ( 2017 ). Nicolas Papernot and Patrick McDaniel, 2017. Extending Defensive Distillation. arXiv preprint arXiv:1705.05264 (2017)."},{"key":"e_1_3_2_1_34_1","volume-title":"Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277","author":"Papernot Nicolas","year":"2016","unstructured":"Nicolas Papernot , Patrick McDaniel , and Ian Goodfellow . 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277 ( 2016 ). Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. 
arXiv preprint arXiv:1605.07277 (2016)."},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot Patrick McDaniel Ian Goodfellow Somesh Jha Z. Berkay Celik and Ananthram Swami. 2017. Practical black-box attacks against machine learning Proceedings of the ACM on Asia Conference on Computer and Communications Security. ACM 506--519. Nicolas Papernot Patrick McDaniel Ian Goodfellow Somesh Jha Z. Berkay Celik and Ananthram Swami. 2017. Practical black-box attacks against machine learning Proceedings of the ACM on Asia Conference on Computer and Communications Security. ACM 506--519.","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot Patrick McDaniel Somesh Jha Matt Fredrikson Z. Berkay Celik and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings IEEE European Symposium on Security and Privacy (EuroS&P). 372--387. Nicolas Papernot Patrick McDaniel Somesh Jha Matt Fredrikson Z. Berkay Celik and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings IEEE European Symposium on Security and Privacy (EuroS&P). 372--387.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot Patrick McDaniel Ananthram Swami and Richard Harang 2016. Crafting adversarial input sequences for recurrent neural networks IEEE Military Communications Conference (MILCOM). 49--54. Nicolas Papernot Patrick McDaniel Ananthram Swami and Richard Harang 2016. Crafting adversarial input sequences for recurrent neural networks IEEE Military Communications Conference (MILCOM). 49--54.","DOI":"10.1109\/MILCOM.2016.7795300"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"crossref","unstructured":"Nicolas Papernot Patrick McDaniel Xi Wu Somesh Jha and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks IEEE Symposium on Security and Privacy (SP). 582--597. Nicolas Papernot Patrick McDaniel Xi Wu Somesh Jha and Ananthram Swami. 2016. Distillation as a defense to adversarial perturbations against deep neural networks IEEE Symposium on Security and Privacy (SP). 582--597.","DOI":"10.1109\/SP.2016.41"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"crossref","unstructured":"Christian Szegedy Vincent Vanhoucke Sergey Ioffe Jon Shlens and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2818--2826. Christian Szegedy Vincent Vanhoucke Sergey Ioffe Jon Shlens and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2818--2826.","DOI":"10.1109\/CVPR.2016.308"},{"key":"e_1_3_2_1_40_1","volume-title":"Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199","author":"Szegedy Christian","year":"2013","unstructured":"Christian Szegedy , Wojciech Zaremba , Ilya Sutskever , Joan Bruna , Dumitru Erhan , Ian Goodfellow , and Rob Fergus . 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 ( 2013 ). Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)."},{"key":"e_1_3_2_1_41_1","volume-title":"Ensemble Adversarial Training: Attacks and Defenses. 
arXiv preprint arXiv:1705.07204","author":"Tram\u00e8r Florian","year":"2017","unstructured":"Florian Tram\u00e8r , Alexey Kurakin , Nicolas Papernot , Dan Boneh , and Patrick McDaniel . 2017. Ensemble Adversarial Training: Attacks and Defenses. arXiv preprint arXiv:1705.07204 ( 2017 ). Florian Tram\u00e8r, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel. 2017. Ensemble Adversarial Training: Attacks and Defenses. arXiv preprint arXiv:1705.07204 (2017)."},{"key":"e_1_3_2_1_42_1","volume-title":"Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv preprint arXiv:1704.01155","author":"Xu Weilin","year":"2017","unstructured":"Weilin Xu , David Evans , and Yanjun Qi . 2017 . Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv preprint arXiv:1704.01155 (2017). Weilin Xu, David Evans, and Yanjun Qi. 2017. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv preprint arXiv:1704.01155 (2017)."},{"key":"e_1_3_2_1_43_1","volume-title":"Feature Squeezing Mitigates and Detects Carlini\/Wagner Adversarial Examples. arXiv preprint arXiv:1705.10686","author":"Xu Weilin","year":"2017","unstructured":"Weilin Xu , David Evans , and Yanjun Qi. 2017. Feature Squeezing Mitigates and Detects Carlini\/Wagner Adversarial Examples. arXiv preprint arXiv:1705.10686 ( 2017 ). Weilin Xu, David Evans, and Yanjun Qi. 2017. Feature Squeezing Mitigates and Detects Carlini\/Wagner Adversarial Examples. arXiv preprint arXiv:1705.10686 (2017)."},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Valentina Zantedeschi Maria-Irina Nicolae and Ambrish Rawat. 2017. Efficient Defenses Against Adversarial Attacks. arXiv preprint arXiv:1707.06728. Valentina Zantedeschi Maria-Irina Nicolae and Ambrish Rawat. 2017. Efficient Defenses Against Adversarial Attacks. arXiv preprint arXiv:1707.06728.","DOI":"10.1145\/3128572.3140449"},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"crossref","unstructured":"Stephan Zheng Yang Song Thomas Leung and Ian Goodfellow. 2016. Improving the robustness of deep neural networks via stability training Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4480--4488. Stephan Zheng Yang Song Thomas Leung and Ian Goodfellow. 2016. Improving the robustness of deep neural networks via stability training Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
4480--4488.","DOI":"10.1109\/CVPR.2016.485"}],"event":{"name":"CCS '17: 2017 ACM SIGSAC Conference on Computer and Communications Security","location":"Dallas Texas USA","acronym":"CCS '17","sponsor":["SIGSAC ACM Special Interest Group on Security, Audit, and Control"]},"container-title":["Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3128572.3140448","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3128572.3140448","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,7]],"date-time":"2023-01-07T14:09:27Z","timestamp":1673100567000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3128572.3140448"}},"subtitle":["Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models"],"short-title":[],"issued":{"date-parts":[[2017,11,3]]},"references-count":45,"alternative-id":["10.1145\/3128572.3140448","10.1145\/3128572"],"URL":"https:\/\/doi.org\/10.1145\/3128572.3140448","relation":{},"subject":[],"published":{"date-parts":[[2017,11,3]]},"assertion":[{"value":"2017-11-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
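
The record above follows the standard Crossref work schema: the payload sits under "message", "title" and "subtitle" are lists of strings, and each entry in "author" carries "given"/"family" names. A minimal sketch of reading the key fields with the Python standard library, assuming the record has been saved locally as crossref_record.json (the filename is an illustrative assumption):

import json

# Load the Crossref response; the work metadata lives under "message".
# "crossref_record.json" is a hypothetical local copy of the record above.
with open("crossref_record.json") as f:
    work = json.load(f)["message"]

# Title and subtitle are lists of strings in the Crossref schema.
title = work["title"][0]
subtitle = work["subtitle"][0] if work.get("subtitle") else ""

# Each author object has "given"/"family" names plus affiliations.
authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"])

print(f"{title}: {subtitle}")
print(f"Authors: {authors}")
print(f"DOI: https://doi.org/{work['DOI']}")
print(f"Cited {work['is-referenced-by-count']} times; {work['references-count']} references.")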