Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems

Wei Jia (School of Cyber Science and Engineering, Huazhong University of Science and Technology), Zhaojun Lu (School of Cyber Science and Engineering, Huazhong University of Science and Technology), Haichun Zhang (Huazhong University of Science and Technology), Zhenglin Liu (Huazhong University of Science and Technology), Jie Wang (Shenzhen Kaiyuan Internet Security Co., Ltd), Gang Qu (University…

Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs) and have received a lot of attention recently. However, the majority of research on AEs is in the digital domain, and the adversarial patches are static. Such research differs greatly from many real-world DNN applications, such as Traffic Sign Recognition (TSR) systems in autonomous vehicles. In TSR systems, object detectors use DNNs to process streaming video in real time. From the perspective of the object detector, the traffic sign's position and the quality of the video change continuously, rendering digital AEs ineffective in the physical world.

In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors. Robustness is achieved in three ways. First, we simulate in-vehicle cameras by extending the distribution of image transformations with a blur transformation and a resolution transformation. Second, we design single and multiple bounding-box filters to improve the efficiency of perturbation training. Third, we consider four representative attack vectors, namely the Hiding Attack (HA), the Appearance Attack (AA), the Non-Target Attack (NTA), and the Target Attack (TA). For each of them, a loss function is defined to minimize the impact of the fabrication process on the physical AEs.
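The pipeline is described here in prose only. As an illustration of the transformation-based robustness step, below is a minimal PyTorch sketch of Expectation-over-Transformation style patch training that samples the two camera-simulating transforms highlighted above (blur and resolution) and optimizes a Hiding Attack loss. The model interface, parameter ranges, and helper names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of EOT-style patch training with
# blur and resolution transforms simulating an in-vehicle camera.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def sample_camera_transform(img):
    """Apply a randomly sampled blur + resolution change to a batch of
    images (hypothetical parameter ranges)."""
    # Random Gaussian blur: odd kernel size in {3, 5, 7}, random sigma.
    k = int(torch.randint(1, 4, (1,))) * 2 + 1
    img = TF.gaussian_blur(img, kernel_size=k,
                           sigma=float(torch.empty(1).uniform_(0.5, 2.0)))
    # Random resolution change: downsample, then upsample back.
    h, w = img.shape[-2:]
    scale = float(torch.empty(1).uniform_(0.3, 1.0))
    small = F.interpolate(img, scale_factor=scale, mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear",
                         align_corners=False)

def train_patch(model, sign, mask, steps=500, lr=0.01):
    """Optimize a patch for a Hiding Attack: suppress the detector's
    objectness over sampled transformations. `sign` is a (C, H, W) image
    of the traffic sign, `mask` a binary patch-region mask of the same
    shape, and `model(x)` is assumed to return per-box objectness scores."""
    patch = torch.rand_like(sign, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        adv = sign * (1 - mask) + patch * mask        # paste patch onto sign
        adv_t = sample_camera_transform(adv.unsqueeze(0))
        obj_scores = model(adv_t)                     # assumed shape: (N_boxes,)
        loss = obj_scores.max()                       # suppress strongest detection
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                        # keep a printable pixel range
    return patch.detach()
```

In practice, the loss term would differ per attack vector (e.g., maximizing a wrong class score for a Target Attack), and the transformation distribution would also include the standard rotation, scaling, and lighting changes used in prior physical-attack work.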

We perform a comprehensive set of experiments under a variety of environmental conditions by varying the distance from $0$ m to $30$ m, changing the angle from $-60^{\circ}$ to $60^{\circ}$, and considering illumination in sunny and cloudy weather as well as at night. The experimental results show that the physical AEs generated by our pipeline are effective and robust when attacking the YOLO v5 based TSR system. The attacks have good transferability and can deceive other state-of-the-art object detectors. We launched HA and NTA on a brand-new 2021 model vehicle. Both attacks succeeded in fooling the TSR system, which could be a life-threatening case for autonomous vehicles. Finally, we discuss three defense mechanisms based on image preprocessing, AE detection, and model enhancement.
