Stacking up the LLM Risks: Applied Machine Learning Security - NDSS Symposium

Dr. Gary McGraw, Berryville Institute of Machine Learning

I present the results of an architectural risk analysis (ARA) of large language models (LLMs), guided by an understanding of standard machine learning (ML) risks previously identified by BIML in 2020. After a brief level-set, I cover the top 10 LLM risks, then detail 23 black-box LLM foundation model risks screaming out for regulation, and finally provide a bird's-eye view of all 81 LLM risks BIML identified. BIML's first work, published in January 2020, presented an in-depth ARA of a generic machine learning process model, identifying 78 risks. In this talk, I consider a more specific machine learning use case—large language models—and report the results of a detailed ARA of LLMs. This ARA serves two purposes: 1) it shows how our original BIML-78 can be adapted to a more particular ML use case, and 2) it provides a detailed accounting of LLM risks. At BIML, we are interested in "building security in" to ML systems from a security engineering perspective. Securing a modern LLM system (even if what's under scrutiny is only an application involving LLM technology) must involve diving into the engineering and design of the specific LLM system itself. This ARA is intended to make that kind of detailed work easier and more consistent by providing a baseline and a set of risks to consider.

Speaker's Biography: Gary McGraw is co-founder of the Berryville Institute of Machine Learning, where his work focuses on machine learning security. He is a globally recognized authority on software security and the author of eight best-selling books on the topic, including Software Security, Exploiting Software, Building Secure Software, Java Security, and Exploiting Online Games; he is also editor of the Addison-Wesley Software Security series. Dr. McGraw has written over 100 peer-reviewed scientific publications. Gary serves on the Advisory Boards of Calypso AI, Legit, Irius Risk, Maxmyinterest, and Red Sift. He has also served as a Board member of Cigital and Codiscope (acquired by Synopsys) and as Advisor to CodeDX (acquired by Synopsys), Black Duck (acquired by Synopsys), Dasient (acquired by Twitter), Fortify Software (acquired by HP), and Invotas (acquired by FireEye). Gary produced the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine for thirteen years. His dual PhD is in Cognitive Science and Computer Science from Indiana University, where he serves on the Dean's Advisory Council for the Luddy School of Informatics, Computing, and Engineering.
