Hi! I'm Nat, a 4th-year undergrad at UC Berkeley 🧸. I've done research at Scale AI and the Center for AI Safety on evaluating the security and safety of ML systems.
Publications

LLM Defenses Are Not Robust to Multi-Turn Human Jailbreaks Yet
Nathaniel Li, Ziwen Han, Ian Steneker, Willow Primack, Riley Goodside, Hugh Zhang, Zifan Wang, Cristina Menghini, Summer Yue
NeurIPS 2024 Red Teaming Workshop (Oral)
paper / website / dataset
TL;DR: Current LLM defenses, which are remarkably robust against automated adversarial attacks, are not robust against humans who attack over multiple turns, a more realistic threat model of malicious use in the real world.
The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning
Nathaniel Li*, Alexander Pan*, Anjali Gopal†, Summer Yue†, Daniel Berrios†, Alice Gatti‡, Justin D. Li‡, Ann-Kathrin Dombrowski‡, Shashwat Goel‡, Long Phan‡, Gabriel Mukobi, Nathan Helm-Burger, Rassin Lababidi, Lennart Justen, Andrew Bo Liu, Michael Chen, Isabelle Barrass, Oliver Zhang, Xiaoyuan Zhu, Rishub Tamirisa, Bhrugu Bharathi, Adam Khoja, Zhenqi Zhao, Ariel Herbert-Voss, Cort B. Breuer, Samuel Marks, Oam Patel, Andy Zou, Mantas Mazeika, Zifan Wang, Palash Oswal, Weiran Lin, Adam A. Hunt, Justin Tienken-Harder, Kevin Y. Shih, Kemper Talley, John Guan, Russell Kaplan, Ian Steneker, David Campbell, Brad Jokubaitis, Alex Levinson, Jean Wang, William Qian, Kallol Krishna Karmakar, Steven Basart, Stephen Fitz, Mindy Levine, Ponnurangam Kumaraguru, Uday Tupakula, Vijay Varadharajan, Yan Shoshitaishvili, Jimmy Ba, Kevin M. Esvelt, Alexandr Wang**, Dan Hendrycks**
ICML 2024
paper / website / code / TIME / Scale AI blog / CAIS blog
TL;DR: WMDP is an LLM benchmark of hazardous knowledge in biosecurity, cybersecurity, and chemical security, for measuring and mitigating the risk of malicious use. To reduce this risk, we introduce RMU, an unlearning method that reduces LLM performance on WMDP while retaining performance on general tasks.
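RMU's core idea is a two-term fine-tuning objective: steer the model's hidden activations on hazardous (forget) text toward a fixed random control vector, while anchoring its activations on benign (retain) text to those of the frozen original model. Below is a minimal PyTorch sketch of that objective; the layer index, loss weight, and HuggingFace-style model interface are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def rmu_loss(model, frozen_model, forget_ids, retain_ids, control_vec,
             layer=7, alpha=100.0):
    # Steer activations on hazardous (forget) text toward a fixed random
    # control vector, degrading the model's representation of that knowledge.
    h_forget = model(forget_ids, output_hidden_states=True).hidden_states[layer]
    forget_loss = F.mse_loss(h_forget, control_vec.expand_as(h_forget))

    # Anchor activations on benign (retain) text to the frozen original
    # model, preserving general capabilities.
    h_retain = model(retain_ids, output_hidden_states=True).hidden_states[layer]
    with torch.no_grad():
        h_ref = frozen_model(retain_ids, output_hidden_states=True).hidden_states[layer]
    retain_loss = F.mse_loss(h_retain, h_ref)

    return forget_loss + alpha * retain_loss

# The control vector is sampled once and held fixed across training, e.g.:
# u = torch.rand(model.config.hidden_size)
# control_vec = 20.0 * u / u.norm()  # scaling coefficient is a hyperparameter
```

In the paper's setup, only a few MLP layers around the target layer are updated, which keeps the intervention localized while the rest of the model stays fixed.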
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, David Forsyth, Dan Hendrycks
ICML 2024
paper / website / code
TL;DR: HarmBench is an evaluation framework for automated language model red teaming. We conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, and propose a highly efficient adversarial training method that enhances LLM robustness.
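The central metric in this kind of framework is attack success rate (ASR): for each harmful behavior, an attack produces a test case, the target model responds, and a classifier judges whether the response exhibits the behavior. Here is a minimal sketch, where `attack`, `target_llm`, and `judge` are hypothetical callables standing in for a red teaming method, the model under test, and a harmfulness classifier.

```python
def attack_success_rate(behaviors, attack, target_llm, judge):
    """Fraction of behaviors the attack successfully elicits (sketch)."""
    successes = 0
    for behavior in behaviors:
        prompt = attack(behavior)        # adversarial test case for this behavior
        completion = target_llm(prompt)  # target model's response
        successes += judge(behavior, completion)  # 1 if harmful, else 0
    return successes / len(behaviors)
```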
Representation Engineering: A Top-Down Approach to AI Transparency
Andy Zou, Long Phan*, Sarah Chen*, James Campbell*, Phillip Guo*, Richard Ren*, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan Hendrycks
arXiv Preprint
paper / website / code
TL;DR: Representation engineering (RepE) enhances LLM transparency by monitoring and manipulating high-level cognitive phenomena. RepE is effective in mitigating dishonesty, hallucination, and other unsafe behaviors.
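One common RepE recipe is to extract a "reading vector" for a concept from activations on contrastive prompts, then use that direction to monitor or steer the model. A minimal sketch follows, assuming a HuggingFace-style model interface; the layer choice and difference-of-means extraction are one simple variant of the approach.

```python
import torch

def reading_vector(model, pos_ids, neg_ids, layer=-1):
    """Extract a unit direction for a concept from contrastive prompts
    (e.g. honest vs. dishonest instructions). Sketch, not the exact method."""
    def last_token_hidden(input_ids):
        h = model(input_ids, output_hidden_states=True).hidden_states[layer]
        return h[:, -1, :]  # activation at the final token position

    with torch.no_grad():
        direction = last_token_hidden(pos_ids).mean(0) - last_token_hidden(neg_ids).mean(0)
    return direction / direction.norm()

# Monitoring: project a new activation onto the direction and threshold it.
# Steering: add a scaled copy of the direction to the residual stream
# during generation to amplify or suppress the concept.
```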
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
Alexander Pan*, Chan Jun Shern*, Andy Zou*, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, Dan Hendrycks
ICML 2023 (Oral)
paper / website / code
TL;DR: MACHIAVELLI is a benchmark of 134 text-based choose-your-own-adventure games, annotated with safety-relevant concepts such as deception, physical harm, and power-seeking, guiding development towards safe yet capable language agents.
About
I’m grateful to have been supervised by Dan Hendrycks and Summer Yue, and mentored by Alexander Pan, Cristina Menghini, Steven Basart, and Zifan Wang. Outside of work, I’m a big fan of overhang bouldering, Porter Robinson, legislative redistricting, and the United States!