My AI Snake Oil book with Arvind Narayanan is now published. The book looks critically at what AI can and cannot do. We share our ideas on our Substack, which has over 40,000 readers. Subscribe here.
I am a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy and a co-author of AI Snake Oil. My research focuses on the societal impact of AI. I have previously worked on AI in industry and academia at Facebook, Columbia University, and EPFL in Switzerland. I am a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW, and I was included in TIME's inaugural list of the 100 most influential people in AI.
I have investigated and offered mitigations for the reproducibility crisis in machine-learning-based science.
My recent work examines contemporary questions in AI safety from an evidence-based perspective. I have studied the impact of open foundation models, the need for researcher access, and the futility of relying on model-level interventions to meaningfully improve safety.
I co-authored the AI Snake Oil book to help distinguish between applications of AI that work and those that don't. My blog of the same name has 40,000+ readers. I have closely investigated the harmful impacts of predictive optimization for decision-making.
I have helped uncover several errors and conceptual shortcomings in AI evaluations. My recent work proposes fixes and benchmarks to improve evaluations of generative AI systems and agents.
While the societal impact of foundation models is rising, transparency is on the decline. My work aims to shed light on the impacts of AI through concrete and rigorous transparency reporting.
I aim to bridge the gap between technologists and policymakers. To that end, an explicit goal of my research program is to influence and inform evidence-based AI policy. My work has been cited in numerous government reports and other policy outputs.