ASMR-Bench: Auditing for Sabotage in ML Research

2026-04-17

Artificial Intelligence
AI summary

The authors created ASMR-Bench, a test set for measuring how well humans and AI models can spot subtle changes to machine learning research code that distort experimental results without being obvious. These changes keep the paper's high-level method intact but alter implementation details such as training data, hyperparameters, or evaluation code. Both frontier AI models and humans working with AI assistance struggled to reliably find the sabotage. The authors also found that AI-generated sabotage was usually easier to catch than sabotage written by humans. They release ASMR-Bench to support work on auditing AI-conducted research.

machine learning, code sabotage, auditing, large language models, hyperparameters, training data, evaluation code, AUROC, red teaming
Authors
Eric Gan, Aryan Bhatt, Buck Shlegeris, Julian Stastny, Vivek Hebbar
Abstract
As AI systems are increasingly used to conduct research autonomously, misaligned systems could introduce subtle flaws that produce misleading results while evading detection. We introduce ASMR-Bench (Auditing for Sabotage in ML Research), a benchmark for evaluating the ability of auditors to detect sabotage in ML research codebases. ASMR-Bench consists of 9 ML research codebases with sabotaged variants that produce qualitatively different experimental results. Each sabotage modifies implementation details, such as hyperparameters, training data, or evaluation code, while preserving the high-level methodology described in the paper. We evaluated frontier LLMs and LLM-assisted human auditors on ASMR-Bench and found that both struggled to reliably detect sabotage: the best performance was an AUROC of 0.77 and a top-1 fix rate of 42%, achieved by Gemini 3.1 Pro. We also tested LLMs as red teamers and found that LLM-generated sabotages were weaker than human-generated ones but still sometimes evaded same-capability LLM auditors. We release ASMR-Bench to support research on monitoring and auditing techniques for AI-conducted research.
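The AUROC figure quoted above measures how well an auditor's suspicion scores separate sabotaged codebases from clean ones: it is the probability that a randomly chosen sabotaged codebase receives a higher score than a randomly chosen clean one. A minimal pure-Python sketch of this metric, using the pairwise Mann-Whitney formulation with hypothetical suspicion scores (not values from the paper):

```python
def auroc(sabotaged_scores, clean_scores):
    """AUROC as the probability that a sabotaged codebase's suspicion
    score exceeds a clean codebase's score; ties count as 0.5."""
    wins = 0.0
    for s in sabotaged_scores:
        for c in clean_scores:
            if s > c:
                wins += 1.0
            elif s == c:
                wins += 0.5
    return wins / (len(sabotaged_scores) * len(clean_scores))

# Hypothetical auditor suspicion scores for illustration only:
sabotaged = [0.9, 0.7, 0.6, 0.4]
clean = [0.8, 0.5, 0.3, 0.2]
print(auroc(sabotaged, clean))  # → 0.75
```

An AUROC of 0.5 corresponds to an auditor whose scores carry no signal, and 1.0 to perfect separation, so the best reported score of 0.77 indicates only modest discrimination.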