


Stephen Casper
2020 – today
- 2025
- [i37] Fazl Barez, Tingchen Fu, Ameya Prabhu, Stephen Casper, Amartya Sanyal, Adel Bibi, Aidan O'Gara, Robert Kirk, Ben Bucknall, Tim Fist, Luke Ong, Philip Torr, Kwok-Yan Lam, Robert Trager, David Krueger, Sören Mindermann, José Hernández-Orallo, Mor Geva, Yarin Gal: Open Problems in Machine Unlearning for AI Safety. CoRR abs/2501.04952 (2025)
- [i36] Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Isaac Bloom, Stella Biderman, Adrià Garriga-Alonso, Arthur Conmy, Neel Nanda, Jessica Rumbelow, Martin Wattenberg, Nandi Schoots, Joseph Miller, Eric J. Michaud, Stephen Casper, Max Tegmark, William Saunders, David Bau, Eric Todd, Atticus Geiger, Mor Geva, Jesse Hoogland, Daniel Murfet, Tom McGrath: Open Problems in Mechanistic Interpretability. CoRR abs/2501.16496 (2025)
- [i35] Yoshua Bengio, Sören Mindermann, Daniel Privitera, Tamay Besiroglu, Rishi Bommasani, Stephen Casper, Yejin Choi, Philip Fox, Ben Garfinkel, Danielle Goldfarb, Hoda Heidari, Anson Ho, Sayash Kapoor, Leila Khalatbari, Shayne Longpre, Sam Manning, Vasilios Mavroudis, Mantas Mazeika, Julian Michael, Jessica Newman, Kwan Yee Ng, Chinasa T. Okolo, Deborah Raji, Girish Sastry, Elizabeth Seger, Theodora Skeadas, Tobin South, Emma Strubell, Florian Tramèr, Lucia Velasco, Nicole Wheeler, Daron Acemoglu, Olubayo Adekanmbi, David Dalrymple, Thomas G. Dietterich, Edward W. Felten, Pascale Fung, Pierre-Olivier Gourinchas, Fredrik Heintz, Geoffrey E. Hinton, Nick R. Jennings, Andreas Krause, Susan Leavy, Percy Liang, Teresa Ludermir, Vidushi Marda, Helen Margetts, John A. McDermid, Jane Munga, Arvind Narayanan, Alondra Nelson, Clara Neppel, Alice Oh, Gopal Ramchurn, Stuart Russell, Marietje Schaake, Bernhard Schölkopf, Dawn Song, Alvaro Soto, Lee Tiedrich, Gaël Varoquaux, Andrew Yao, Ya-Qin Zhang, Fahad Albalawi, Marwan Alserkal, Olubunmi Ajala, Guillaume Avrin, Christian Busch, André Carlos Ponce de Leon Ferreira de Carvalho, Bronwyn Fox, Amandeep Singh Gill, Ahmet Halit Hatip, Juha Heikkilä, Gill Jolly, Ziv Katzir, Hiroaki Kitano, Antonio Krüger, Chris Johnson, Saif M. Khan, Kyoung Mu Lee, Dominic Vincent Ligot, Oleksii Molchanovskyi, Andrea Monti, Nusu Mwamanzi, Mona Nemer, Nuria Oliver, José Ramón López Portillo, Balaraman Ravindran, Raquel Pezoa Rivera, Hammam Riza, Crystal Rugege, Ciarán Seoighe, Jerry Sheehan, Haroon Sheikh, Denise Wong, Yi Zeng: International AI Safety Report. CoRR abs/2501.17805 (2025)
- [i34] Stephen Casper, Luke Bailey, Rosco Hunter, Carson Ezell, Emma Cabalé, Michael Gerovitch, Stewart Slocum, Kevin Wei, Nikola Jurkovic, Ariba Khan, Phillip J. K. Christoffersen, A. Pinar Ozisik, Rakshit Trivedi, Dylan Hadfield-Menell, Noam Kolt: The AI Agent Index. CoRR abs/2502.01635 (2025)
- [i33] Zora Che, Stephen Casper, Robert Kirk, Anirudh Satheesh, Stewart Slocum, Lev E. McKinney, Rohit Gandikota, Aidan Ewart, Domenic Rosati, Zichu Wu, Zikui Cai, Bilal Chughtai, Yarin Gal, Furong Huang, Dylan Hadfield-Menell: Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities. CoRR abs/2502.05209 (2025)
- [i32] Stephen Casper, David Krueger, Dylan Hadfield-Menell: Pitfalls of Evidence-Based AI Policy. CoRR abs/2502.09618 (2025)
- [i31] Leo Schwinn, Yan Scholten, Tom Wollschläger, Sophie Xhonneux, Stephen Casper, Stephan Günnemann, Gauthier Gidel: Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More Measurable Objectives. CoRR abs/2502.11910 (2025)
- [i30] Stephen Casper, Luke Bailey, Tim Schreier: Practical Principles for AI Cost and Compute Accounting. CoRR abs/2502.15873 (2025)
- 2024
- [c7] Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas A. Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell: Black-Box Access is Insufficient for Rigorous AI Audits. FAccT 2024: 2254-2272
- [i29] Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Alexander Haupt, Kevin Wei, Jérémy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell: Black-Box Access is Insufficient for Rigorous AI Audits. CoRR abs/2401.14446 (2024)
- [i28] Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Xiaojun Xu, Yuguang Yao, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, Yang Liu: Rethinking Machine Unlearning for Large Language Models. CoRR abs/2402.08787 (2024)
- [i27] Aengus Lynch, Phillip Guo, Aidan Ewart, Stephen Casper, Dylan Hadfield-Menell: Eight Methods to Evaluate Robust Unlearning in LLMs. CoRR abs/2402.16835 (2024)
- [i26] Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell: Defending Against Unforeseen Failure Modes with Latent Adversarial Training. CoRR abs/2403.05030 (2024)
- [i25] Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, Angus Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, Dylan Hadfield-Menell: The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability. CoRR abs/2404.02949 (2024)
- [i24] Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, Benjamin L. Edelman, Zhaowei Zhang, Mario Günther, Anton Korinek, José Hernández-Orallo, Lewis Hammond, Eric J. Bigelow, Alexander Pan, Lauro Langosco, Tomasz Korbak, Heidi Zhang, Ruiqi Zhong, Seán Ó hÉigeartaigh, Gabriel Recchia, Giulio Corsi, Alan Chan, Markus Anderljung, Lilian Edwards, Yoshua Bengio, Danqi Chen, Samuel Albanie, Tegan Maharaj, Jakob N. Foerster, Florian Tramèr, He He, Atoosa Kasirzadeh, Yejin Choi, David Krueger: Foundational Challenges in Assuring Alignment and Safety of Large Language Models. CoRR abs/2404.09932 (2024)
- [i23] Anka Reuel, Ben Bucknall, Stephen Casper, Tim Fist, Lisa Soder, Onni Aarne, Lewis Hammond, Lujain Ibrahim, Alan Chan, Peter Wills, Markus Anderljung, Ben Garfinkel, Lennart Heim, Andrew Trask, Gabriel Mukobi, Rylan Schaeffer, Mauricio Baker, Sara Hooker, Irene Solaiman, Alexandra Sasha Luccioni, Nitarshan Rajkumar, Nicolas Moës, Jeffrey Ladish, Neel Guha, Jessica Newman, Yoshua Bengio, Tobin South, Alex Pentland, Sanmi Koyejo, Mykel J. Kochenderfer, Robert Trager: Open Problems in Technical AI Governance. CoRR abs/2407.14981 (2024)
- [i22] Abhay Sheshadri, Aidan Ewart, Phillip Guo, Aengus Lynch, Cindy Wu, Vivek Hebbar, Henry Sleight, Asa Cooper Stickland, Ethan Perez, Dylan Hadfield-Menell, Stephen Casper: Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs. CoRR abs/2407.15549 (2024)
- [i21] Peter Slattery, Alexander K. Saeri, Emily A. C. Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, Neil Thompson: The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence. CoRR abs/2408.12622 (2024)
- [i20] Zhonghao He, Jascha Achterberg, Katie Collins, Kevin K. Nejad, Danyal Akarca, Yinzhu Yang, Wes Gurnee, Ilia Sucholutsky, Yuhan Tang, Rebeca Ianov, George Ogden, Chole Li, Kai Sandbrink, Stephen Casper, Anna Ivanova, Grace W. Lindsay: Multilevel Interpretability Of Artificial Neural Networks: Leveraging Framework And Methods From Neuroscience. CoRR abs/2408.12664 (2024)
- [i19] Nathalie Maria Kirch, Severin Field, Stephen Casper: What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks. CoRR abs/2411.03343 (2024)
- [i18] Aidan Peppin, Anka Reuel, Stephen Casper, Elliot Jones, Andrew Strait, Usman Anwar, Anurag Agrawal, Sayash Kapoor, Sanmi Koyejo, Marie Pellat, Rishi Bommasani, Nick Frosst, Sara Hooker: The Reality of AI and Biorisk. CoRR abs/2412.01946 (2024)
- [i17] Yoshua Bengio, Sören Mindermann, Daniel Privitera, Tamay Besiroglu, Rishi Bommasani, Stephen Casper, Yejin Choi, Danielle Goldfarb, Hoda Heidari, Leila Khalatbari, Shayne Longpre, Vasilios Mavroudis, Mantas Mazeika, Kwan Yee Ng, Chinasa T. Okolo, Deborah Raji, Theodora Skeadas, Florian Tramèr, Bayo Adekanmbi, Paul F. Christiano, David Dalrymple, Thomas G. Dietterich, Edward W. Felten, Pascale Fung, Pierre-Olivier Gourinchas, Nick R. Jennings, Andreas Krause, Percy Liang, Teresa Ludermir, Vidushi Marda, Helen Margetts, John A. McDermid, Arvind Narayanan, Alondra Nelson, Alice Oh, Gopal Ramchurn, Stuart Russell, Marietje Schaake, Dawn Song, Alvaro Soto, Lee Tiedrich, Gaël Varoquaux, Andrew Yao, Ya-Qin Zhang: International Scientific Report on the Safety of Advanced AI (Interim Report). CoRR abs/2412.05282 (2024)
- [i16] Luke Bailey, Alex Serrano, Abhay Sheshadri, Mikhail Seleznyov, Jordan Taylor, Erik Jenner, Jacob Hilton, Stephen Casper, Carlos Guestrin, Scott Emmons: Obfuscated Activations Bypass LLM Latent-Space Defenses. CoRR abs/2412.09565 (2024)
- 2023
- [j1] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca D. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. Trans. Mach. Learn. Res. 2023 (2023)
- [c6] Stephen Casper, Dylan Hadfield-Menell, Gabriel Kreiman: White-Box Adversarial Policies in Deep Reinforcement Learning. SafeAI@AAAI 2023
- [c5] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? EMNLP 2023: 4791-4797
- [c4] Stephen Casper, Tong Bu, Yuxiao Li, Jiawei Li, Kevin Zhang, Kaivalya Hariharan, Dylan Hadfield-Menell: Red Teaming Deep Neural Networks with Feature Synthesis Tools. NeurIPS 2023
- [c3] Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. SaTML 2023: 464-483
- [i15] Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Dylan Hadfield-Menell: Benchmarking Interpretability Tools for Deep Neural Networks. CoRR abs/2302.10894 (2023)
- [i14] Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell: Explore, Establish, Exploit: Red Teaming Language Models from Scratch. CoRR abs/2306.09442 (2023)
- [i13] Stephen Casper, Zifan Guo, Shreya Mogulothu, Zachary Marinov, Chinmay Deshpande, Rui-Jie Yew, Zheng Dai, Dylan Hadfield-Menell: Measuring the Success of Diffusion Models at Imitating Human Artists. CoRR abs/2307.04028 (2023)
- [i12] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Tong Wang, Samuel Marks, Charbel-Raphaël Ségerie, Micah Carroll, Andi Peng, Phillip J. K. Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Biyik, Anca D. Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell: Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback. CoRR abs/2307.15217 (2023)
- [i11] Rusheb Shah, Quentin Feuillade-Montixi, Soroush Pour, Arush Tagade, Stephen Casper, Javier Rando: Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation. CoRR abs/2311.03348 (2023)
- [i10] Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas: Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness? CoRR abs/2312.03729 (2023)
- 2022
- [c2] Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman: Robust Feature-Level Adversaries are Interpretability Tools. NeurIPS 2022
- [i9] Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. CoRR abs/2207.13243 (2022)
- [i8] Stephen Casper, Dylan Hadfield-Menell, Gabriel Kreiman: White-Box Adversarial Policies in Deep Reinforcement Learning. CoRR abs/2209.02167 (2022)
- [i7] Stephen Casper, Kaivalya Hariharan, Dylan Hadfield-Menell: Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. CoRR abs/2211.10024 (2022)
- 2021
- [c1] Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman: Frivolous Units: Wider Networks Are Not Really That Wide. AAAI 2021: 6921-6929
- [i6] Daniel Filan, Stephen Casper, Shlomi Hod, Cody Wild, Andrew Critch, Stuart Russell: Clusterability in Neural Networks. CoRR abs/2103.03386 (2021)
- [i5] Stephen Casper, Max Nadeau, Gabriel Kreiman: One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features. CoRR abs/2110.03605 (2021)
- [i4] Shlomi Hod, Stephen Casper, Daniel Filan, Cody Wild, Andrew Critch, Stuart Russell: Detecting Modularity in Deep Neural Networks. CoRR abs/2110.08058 (2021)
- 2020
- [i3] Abdelrhman Saleh, Tovly Deutsch, Stephen Casper, Yonatan Belinkov, Stuart M. Shieber: Probing Neural Dialog Models for Conversational Understanding. CoRR abs/2006.08331 (2020)
- [i2] Stephen Casper: The Achilles Heel Hypothesis: Pitfalls for AI Systems via Decision Theoretic Adversaries. CoRR abs/2010.05418 (2020)
2010 – 2019
- 2019
- [i1] Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman: Removable and/or Repeated Units Emerge in Overparametrized Deep Neural Networks. CoRR abs/1912.04783 (2019)
