Can Coding Agents Reproduce Findings in Computational Materials Science?

2026-05-01

Software Engineering, Artificial Intelligence, Computation and Language
AI summary

The authors created AutoMat, a benchmark that tests how well AI coding agents can reproduce computational experiments in materials science. These experiments demand more than coding: agents must follow complex, domain-specific procedures and check whether the results actually support a scientific claim. The authors found that current agents struggle, succeeding only about half the time, and they do worst when the procedure must be reconstructed from the paper text alone. The work pinpoints where these systems fail and poses a challenge for improving them in scientific research.

Large language models, Computational workflows, Materials science, Scientific reproducibility, Coding agents, Benchmark, Methodological deviations, Execution fragility, Domain-specific procedures, AI-for-science
Authors
Ziyang Huang, Yi Cao, Ali K. Shargh, Jing Luo, Ruidong Mei, Mohd Zaki, Zhan Liu, Wyatt Bunstine, William Jurayj, Somdatta Goswami, Tyrel McQueen, Michael Shields, Jaafar El-Awady, Paulette Clancy, Benjamin Van Durme, Nicholas Andrews, William Walden, Daniel Khashabi
Abstract
Large language models are increasingly deployed as autonomous coding agents and have achieved remarkably strong performance on software engineering benchmarks. However, it is unclear whether such success transfers to computational scientific workflows, where tasks demand not only strong coding skills but also the ability to navigate complex, domain-specific procedures and to interpret results in the context of scientific claims. To address this question, we present AutoMat, a benchmark for evaluating LLM-based agents' ability to reproduce claims from computational materials science. AutoMat poses three interrelated challenges: recovering underspecified computational procedures, navigating specialized toolchains, and determining whether the resulting evidence supports a claim. Working closely with subject matter experts, we curate a set of claims from real materials science papers to test whether coding agents can recover and execute the end-to-end workflow needed to support (or undermine) such claims. We then evaluate multiple representative coding-agent settings across several foundation models. Our results show that current LLM-based agents obtain low overall success rates on AutoMat, with the best-performing setting succeeding only 54.1% of the time. Error analysis further reveals that agents perform worst when workflows must be reconstructed from the paper text alone, and that they fail primarily due to incomplete procedures, methodological deviations, and execution fragility. Taken together, these findings position AutoMat both as a benchmark for computational scientific reproducibility and as a tool for diagnosing the current limitations of agentic systems in AI-for-science settings.