Workshop on Security and Privacy in Standardized IoT (SDIoTSec) 2024 Program - NDSS Symposium

Monday, 26 February

  • 13:30 - 13:35
    Opening Remarks
    Cockatoo Room
  • 13:35 - 14:30
    Keynote #1 by Dr. Michael J. Fagan
    Cockatoo Room
    • IoT technologies bridge domains to create innovative solutions, but this can shift trust balances and strain cybersecurity and privacy. Since humans are commonly the beneficiaries or targets of IoT systems, concerns about privacy (and safety) may be heightened. IoT devices can also occupy a more sensitive position in a network while having less power, computing, and other resources than other equipment (i.e., they are constrained). Toward solving these challenges, IoT can leverage existing standards, but new standards are needed for at least some cases. Of course, cybersecurity and privacy management is technology agnostic, and standards for these domains certainly apply to IoT, but especially for the cybersecurity practitioner, the realities of IoT (e.g., constraints) can break expectations built into the standards or into how they are generally understood and used. Today, standards and national efforts around the cybersecurity and privacy of IoT abound. Notable examples in the United States are the IoT Cybersecurity Improvement Act and the Cyber Trust Mark cybersecurity labeling program for consumer IoT. Globally, multiple nations are exploring their own labeling programs, including, but not limited to, Singapore and Japan. In the European Union, efforts are underway to ensure the cybersecurity of IoT products via the Cyber Resilience Act. In the standards space, we can look to solutions from the IETF for device intent signaling and device on-boarding, among other topics, and to efforts such as the ISO/IEC 27400 series. These efforts are welcome, since IoT adoption depends on delivering solutions that preserve cybersecurity and privacy. Research, and then standards, can help bridge these gaps and inform efforts to raise the bar of cybersecurity and privacy for IoT across all sectors, since doing so can motivate trust in and adoption of the technology.

      Speaker's Biography: Michael Fagan is a Computer Scientist and Technical Lead with the Cybersecurity for IoT Program, which aims to develop guidance toward improving the cybersecurity of IoT devices and systems. The program works within the National Institute of Standards and Technology’s Information Technology Laboratory (ITL) and supports the development and application of standards, guidelines, and related tools to improve the cybersecurity of IoT systems, products, connected devices, and the environments in which they are deployed. By collaborating with stakeholders across government, industry, international bodies, academia, and consumers, the program aims to cultivate trust and foster an environment that enables innovation on a global scale. Michael leads work exploring IoT cybersecurity in specific sectors or use cases, such as enterprise systems, the federal government, and consumer home networks. He holds a Ph.D. in Computer Science & Engineering.

  • 14:30 - 15:10
    Session 1: Security and Privacy in the Matter Protocol and Standard
    Cockatoo Room
    • Ravindra Mangar (Dartmouth College), Jingyu Qian (University of Illinois), Wondimu Zegeye (Morgan State University), Abdulrahman AlRabah, Ben Civjan, Shalni Sundram, Sam Yuan, Carl A. Gunter (University of Illinois), Mounib Khanafer (American University of Kuwait), Kevin Kornegay (Morgan State University), Timothy J. Pierson, David Kotz (Dartmouth College)

      As the integration of smart devices into our daily environment accelerates, the vision of a fully integrated smart home is becoming more achievable through standards such as the Matter protocol. In response, we built a testbed and introduce a network utility device designed to sniff network traffic and provide a wireless access point within IoT networks. This paper also presents the experience of students using the testbed in an academic scenario.
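      A minimal sketch of the packet-sniffing side of such a utility device is shown below. This is an illustration only, not the authors' tool; the use of Scapy, the interface name, and the packet count are all assumptions.

      ```python
      # Hypothetical passive traffic logger (NOT the paper's utility device).
      # Assumes Scapy is installed and "wlan0" is a wireless interface bridged
      # to the IoT network under observation.
      from scapy.all import sniff

      def log_packet(pkt):
          # Print a one-line summary of each observed frame for later analysis.
          print(pkt.summary())

      # Capture 100 frames without keeping them in memory.
      sniff(iface="wlan0", prn=log_packet, count=100, store=False)
      ```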

    • Song Liao, Jingwen Yan, Long Cheng (Clemson University)

      The rapid evolution of Internet of Things (IoT) technologies allows users to interact with devices in a smart home environment. In an effort to strengthen the connectivity of smart devices across diverse vendors, multiple leading device manufacturers developed the Matter standard, enabling users to control devices from different sources seamlessly. However, the interoperability introduced by Matter poses new challenges to user privacy and safety. In this paper, we propose the Hidden Eavesdropping Attack in Matter-enabled smart home systems, which exploits vulnerabilities in the Matter device pairing process and delegation phase. Our investigation of the Matter device pairing process reveals the possibility of unauthorized delegation. Furthermore, such delegation can grant unauthorized Matter hubs (i.e., hidden hubs) the capability to eavesdrop on other IoT devices without the awareness of device owners. Meanwhile, implementation flaws in vendors’ device management make it harder for device owners to identify such hidden hubs. The disclosed sensitive data about devices, such as the status of door locks, can be leveraged by malicious attackers to deduce users’ activities, potentially leading to security breaches and safety issues.

    • Haoqiang Wang (Chinese Academy of Sciences, University of Chinese Academy of Sciences, Indiana University Bloomington), Yichen Liu (Indiana University Bloomington), Yiwei Fang, Ze Jin, Qixu Liu (Chinese Academy of Sciences, University of Chinese Academy of Sciences, Indiana University Bloomington), Luyi Xing (Indiana University Bloomington)

      The Matter protocol is a new communication standard for smart home devices, aiming to enhance interoperability and compatibility among different vendors. However, vendors may encounter unanticipated security issues during development and deployment phases centered around the Matter protocol. In this paper, we focus on examining vulnerabilities within the Apple Home framework when implementing the Matter protocol, identifying several attack scenarios in which these vulnerabilities can be exploited to perform unauthorized actions while concealing the attacker's identity. We also compare the design of Apple Home with that of Google Home, highlighting the differences and their implications for security. We reported these vulnerabilities to the related vendors, and they have been acknowledged by the Connectivity Standards Alliance (CSA). Our work reveals the challenges and risks associated with adopting the Matter protocol and provides suggestions for improving its security design and implementation.

  • 15:10 - 15:40
    Afternoon Coffee Break
    Boardroom with Foyer
  • 15:40 - 16:35
    Keynote #2 by Dr. Gary McGraw
    Cockatoo Room
    • Dr. Gary McGraw, Berryville Institute of Machine Learning

      I present the results of an architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks previously identified by BIML in 2020. After a brief level-set, I cover the top 10 LLM risks, then detail 23 black-box LLM foundation model risks screaming out for regulation, and finally provide a bird’s-eye view of all 81 LLM risks BIML identified. BIML’s first work, published in January 2020, presented an in-depth ARA of a generic machine learning process model, identifying 78 risks. In this talk, I consider a more specific type of machine learning use case—large language models—and report the results of a detailed ARA of LLMs. This ARA serves two purposes: 1) it shows how our original BIML-78 can be adapted to a more particular ML use case, and 2) it provides a detailed accounting of LLM risks. At BIML, we are interested in “building security in” to ML systems from a security engineering perspective. Securing a modern LLM system (even if what’s under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This ARA is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.

      Speaker's Biography: Gary McGraw is co-founder of the Berryville Institute of Machine Learning, where his work focuses on machine learning security. He is a globally recognized authority on software security and the author of eight best-selling books on this topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and six other books; and he is editor of the Addison-Wesley Software Security series. Dr. McGraw has also written over 100 peer-reviewed scientific publications. Gary serves on the Advisory Boards of Calypso AI, Legit, Irius Risk, Maxmyinterest, and Red Sift. He has also served as a Board member of Cigital and Codiscope (acquired by Synopsys) and as Advisor to CodeDX (acquired by Synopsys), Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). Gary produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years. His dual PhD is in Cognitive Science and Computer Science from Indiana University, where he serves on the Dean’s Advisory Council for the Luddy School of Informatics, Computing, and Engineering.

  • 16:35 - 16:40
    Best Paper Award
    Cockatoo Room
  • 16:40 - 17:30
    Session 2: Enhancing Security and Privacy in Heterogeneous IoT
    Cockatoo Room
    • Atheer Almogbil, Momo Steele, Sofia Belikovetsky (Johns Hopkins University), Adil Inam (University of Illinois at Urbana-Champaign), Olivia Wu (Johns Hopkins University), Aviel Rubin (Johns Hopkins University), Adam Bates (University of Illinois at Urbana-Champaign)

      The rise in the adoption of the Internet of Things (IoT) has led to a surge in information generation and collection. Many IoT devices systematically collect sensitive data pertaining to users’ personal lives, such as user activity, location, and communication. Prior works have focused on uncovering user privacy and profiling concerns in the context of one or two specific devices and threat models. However, user-profiling concerns within a complete smart home ecosystem, under various threat models, have not been explored. In this work, we aim to analyze the privacy and user-profiling concerns in smart home environments under varying levels of threat models. We contribute an analysis of various IoT attacks existing in the literature that enable an adversary to access data on IoT devices. Based on this analysis, we identify the user behaviors that can be inferred from the data accessed by such attacks. Our work reveals the extent to which an adversary can monitor user behavior based on information collected from smart households under varying threat models.

    • Konrad-Felix Krentz (Uppsala University), Thiemo Voigt (Uppsala University, RISE Computer Science)

      Object Security for Constrained RESTful Environments (OSCORE) is an end-to-end security solution for the Constrained Application Protocol (CoAP), which, in turn, is a lightweight application-layer protocol for the Internet of Things (IoT). The recently standardized Echo option allows OSCORE servers to check whether a request was created recently. Previously, OSCORE only offered counter-based replay protection, which is why delayed OSCORE requests were accepted as fresh. However, the Echo-based replay protection entails an additional round trip, thereby prolonging delays, increasing communication overhead, and deteriorating reliability. Moreover, OSCORE remains vulnerable to a denial-of-sleep attack. In this paper, we propose a version of OSCORE with a revised replay protection, namely OSCORE next-generation (OSCORE-NG). OSCORE-NG fixes OSCORE’s denial-of-sleep vulnerability and provides freshness guarantees that surpass those of the Echo-based replay protection, while dispensing with an additional round trip. Furthermore, in long-running sessions, OSCORE-NG incurs even less communication overhead than OSCORE’s counter-based replay protection. OSCORE-NG’s approach is to entangle timestamps in nonces. Except during synchronization, CoAP nodes truncate these timestamps in outgoing OSCORE-NG messages. Receivers fail to restore a timestamp if and only if an OSCORE-NG message is delayed by more than 7.848 s (the default in our implementation). In effect, older OSCORE-NG messages get rejected.
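      The general principle of carrying truncated timestamps in nonces can be illustrated with the hedged sketch below; it is not the OSCORE-NG implementation, and the bit width, clock-tick granularity, and function names are assumptions made purely for exposition.

      ```python
      # Illustrative truncation/restoration of timestamps (NOT OSCORE-NG code;
      # all constants below are assumptions).
      TRUNC_BITS = 7            # low-order timestamp bits carried in the nonce (assumed)
      WINDOW = 1 << TRUNC_BITS  # restoration window, in clock ticks

      def truncate(timestamp_ticks: int) -> int:
          """Sender keeps only the low-order bits of its current timestamp."""
          return timestamp_ticks & (WINDOW - 1)

      def restore(truncated: int, receiver_now_ticks: int) -> int:
          """Receiver reconstructs the full timestamp closest to its own clock
          whose low-order bits match. If the message was delayed by more than
          roughly half the window, the reconstruction is wrong, the rebuilt
          nonce no longer matches, and authenticated decryption rejects the
          stale message."""
          offset = (truncated - receiver_now_ticks) % WINDOW
          if offset > WINDOW // 2:
              offset -= WINDOW
          return receiver_now_ticks + offset
      ```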

    • Olsan Ozbay (Dept. ECE, University of Maryland), Yuntao Liu (ISR, University of Maryland), Ankur Srivastava (Dept. ECE, ISR, University of Maryland)

      Electromagnetic (EM) side-channel attacks (SCAs) have been very powerful in extracting secret information from hardware systems. Existing attacks usually extract discrete values from the EM side channel, such as cryptographic key bits and operation types. In this work, we develop an EM SCA that extracts continuous values used in an averaging process, a common operation in federated learning. A convolutional neural network (CNN) framework is constructed to analyze the collected EM data. Our results show that our attack is able to distinguish the distributions of the underlying data with up to 93% accuracy, indicating that applications previously considered secure, such as federated learning, should be protected from EM side-channel attacks in their implementation.
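      A generic 1-D CNN over side-channel traces, of the kind such an analysis pipeline might use, is sketched below; the layer sizes, trace length, and number of classes are assumptions and do not reflect the authors' architecture.

      ```python
      # Generic 1-D CNN for classifying EM traces (a sketch, NOT the paper's model).
      import torch
      import torch.nn as nn

      class EMTraceCNN(nn.Module):
          def __init__(self, num_classes: int = 4):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=11, stride=2, padding=5), nn.ReLU(),
                  nn.MaxPool1d(4),
                  nn.Conv1d(16, 32, kernel_size=11, stride=2, padding=5), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(8),
              )
              self.classifier = nn.Linear(32 * 8, num_classes)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # x: (batch, 1, trace_length) preprocessed EM samples
              return self.classifier(self.features(x).flatten(1))

      # Example: score a batch of 8 random 4096-sample traces against 4 classes.
      logits = EMTraceCNN()(torch.randn(8, 1, 4096))
      ```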

    • Hamed Haddadpajouh (University of Guelph), Ali Dehghantanha (University of Guelph)

      As the integration of Internet of Things devices continues to increase, the security challenges associated with autonomous, self-executing Internet of Things devices become increasingly critical. This research addresses the vulnerability of deep learning-based malware threat-hunting models, particularly in the context of Industrial Internet of Things environments. The study introduces an innovative adversarial machine learning attack model tailored for generating adversarial payloads at the bytecode level of executable files.

      Our investigation focuses on the MalConv malware threat-hunting model, employing the Fast Gradient Sign Method (FGSM) as the attack model to craft adversarial instances; a generic FGSM sketch follows this abstract. The proposed methodology is systematically evaluated using a comprehensive dataset sourced from instances of cloud-edge Internet of Things malware. The empirical findings reveal a significant reduction in the accuracy of the malware threat-hunting model, plummeting from an initial 99% to 82%. Moreover, our proposed approach sheds light on the effectiveness of adversarial attacks leveraging code repositories, showcasing their ability to evade AI-powered malware threat-hunting mechanisms.

      This work not only offers a practical solution for bolstering deep learning-based malware threat-hunting models in Internet of Things environments but also underscores the pivotal role of code repositories as a potential attack vector. The outcomes of this investigation emphasize the imperative need to recognize code repositories as a distinct attack surface within the landscape of malware threat-hunting models deployed in Internet of Things environments.
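      For reference, the core FGSM update on a differentiable classifier looks roughly as follows. This is a generic sketch with a placeholder model, not the paper's byte-level payload generator; the feature dimension and epsilon are arbitrary.

      ```python
      # Generic Fast Gradient Sign Method (FGSM) perturbation (illustrative only).
      import torch
      import torch.nn as nn

      def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                       epsilon: float = 0.1) -> torch.Tensor:
          """Return x shifted by epsilon in the sign of the loss gradient,
          i.e., in the direction that increases the classifier's loss."""
          x_adv = x.clone().detach().requires_grad_(True)
          loss = nn.functional.cross_entropy(model(x_adv), y)
          loss.backward()
          return (x_adv + epsilon * x_adv.grad.sign()).detach()

      # Example with a placeholder classifier over 256-dimensional feature vectors.
      clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
      x = torch.randn(4, 256)
      y = torch.randint(0, 2, (4,))
      x_adv = fgsm_perturb(clf, x, y)
      ```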

    • Raushan Kumar Singh (IIT Ropar), Sudeepta Mishra (IIT Ropar)

      Modern technology is advancing on many different levels, and the battlefield is no exception. India has about 15,000 km of land borders shared with many neighboring countries, and only 5 of the 29 states in India do not have any shared international borders or coastlines. Wire fences and conventional sensor-based systems are used to protect terrestrial borders. Wire fences, being the only line of defense against intrusions at most unmanned borders, result in frequent cases of unreported incursion, smuggling, and human trafficking. Typically, intruders cut the fence to gain access to Indian land, and sensor-based systems are prone to false alarms due to animal movements. We propose combining the intelligence of Tiny Machine Learning (TinyML) with the communication capability of IoT to make borders safer and intrusion more challenging. To learn the typical fence movements caused by natural factors, we use TinyML. Our learning technique is designed explicitly to differentiate between regular fence movement and suspicious fence disturbance, and the system is efficient enough to detect metal-fence cuts and trespassing. With the aid of online learning environments, the sophisticated TinyML microcontroller’s built-in accelerometer can differentiate between different movement patterns. To identify the most effective defense against sensor-level attacks, we conducted tests to gauge the tolerance levels of conventional microcontroller sensor systems against TinyML-powered microcontrollers when exposed to Electromagnetic Pulse (EMP)-based sensor hacking attempts. To the best of our knowledge, this is the first research conducted to identify the best-suited sensor system for high-precision Internet of Battlefield Things (IoBT) applications. In the real-time model test, the system is found to be 95.4% accurate and readily deployable on TinyML microcontrollers.
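      A minimal sketch of an accelerometer-window classifier in the TinyML spirit is shown below; the window size, labels, training data, and model shape are placeholders rather than the authors' design.

      ```python
      # Tiny accelerometer-window classifier, convertible to TensorFlow Lite
      # for a microcontroller (a sketch, NOT the paper's model).
      import numpy as np
      import tensorflow as tf

      WINDOW = 128  # assumed samples per window of (x, y, z) accelerometer readings

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(WINDOW, 3)),
          tf.keras.layers.Conv1D(8, 9, activation="relu"),
          tf.keras.layers.GlobalAveragePooling1D(),
          tf.keras.layers.Dense(2, activation="softmax"),  # normal vs. suspicious
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

      # Train on (hypothetical) labeled windows, then convert for deployment.
      x = np.random.randn(256, WINDOW, 3).astype("float32")  # placeholder data
      y = np.random.randint(0, 2, size=256)                  # placeholder labels
      model.fit(x, y, epochs=1, verbose=0)
      tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
      ```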