Vibe Econometrics and the Analysis Contract
2026-05-08 • Human-Computer Interaction
AI summary
The author argues that when AI assists with scientific analysis, it not only makes tools easier to use but also spreads the ways those tools can fail. Specifically, the paper discusses "vibe methodology," where AI outputs can look correct while hiding mistakes that require expert knowledge to detect. It highlights risks such as AI making weak assumptions appear strong, and proposes a solution called the Analysis Contract, which sets clear rules for checking data and methods before trusting AI-assisted conclusions. The approach aims to keep AI-assisted research honest and reliable across fields.
Keywords
AI-assisted methodology, vibe coding, vibe inference, causal analysis, econometrics, method-data mismatch, confidence laundering, pre-analysis plans, causal roadmap, data audit
Authors
Lydia Ashton
Abstract
"Vibe coding" and "vibe analytics" have been framed as a democratization of technical capability. This paper argues that AI-assisted methodology more broadly, or what I call "vibe methodology," also democratizes the failure modes specific to each domain. When AI assists with methods whose validity depends on assumptions that cannot be verified from the output alone (a class I call "vibe inference"), the failure surface is structurally different: the output does not reliably signal invalidity, and when it does, recognizing the signal requires the very expertise the workflow bypasses. I focus on "vibe econometrics," the subset of AI-assisted causal analysis where an identification strategy can be named faster than it can be audited. The claim of this paper is not that AI invents inferential failures that did not previously exist, but that it changes their incidence, observability, and persuasive force enough to create a practically distinct governance problem. I identify three failure modes: method-data mismatch, where AI lets users bypass expertise at execution; confidence laundering, where AI amplifies the credibility of formatted output; and invisible forking, which spans both. What is new is not the failure modes but AI's industrialization of their packaging. The barrier between naming a method and executing it has collapsed, and weak foundations, dressed as rigorous analysis, now reach audiences at a scale, speed, and polish that previously required expertise. I propose the Analysis Contract, a pre-commitment framework that adapts the logic of pre-analysis plans and the Causal Roadmap to the AI-assisted setting. The contract imposes three conditions before a causal claim is made: a method-data contract, a data audit, and a pre-commitment statement defining what would count as a disconfirming result. The framework generalizes across domains of vibe inference through domain-specific instantiation.
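To make the three conditions concrete, the following is a minimal sketch (not from the paper; all names and fields are illustrative assumptions) of how an Analysis Contract might be represented in code, with a causal claim permitted only when every assumption in the method-data contract has passed a data audit and a disconfirming result has been specified in advance:

```python
from dataclasses import dataclass

# Hypothetical encoding of the Analysis Contract's three conditions.
# Field names and the example method/assumptions are illustrative,
# not prescribed by the paper.


@dataclass
class AnalysisContract:
    method: str                       # named identification strategy
    required_assumptions: list[str]   # method-data contract: what the data must satisfy
    audit_results: dict[str, bool]    # data audit: assumption -> verified on this dataset?
    disconfirming_result: str         # pre-commitment: what would falsify the claim

    def cleared_to_claim(self) -> bool:
        """A causal claim is allowed only if every required assumption
        was audited and passed, and a disconfirming result was stated
        before the analysis was run."""
        audited = all(self.audit_results.get(a, False)
                      for a in self.required_assumptions)
        return audited and bool(self.disconfirming_result.strip())


# Example: one assumption failed its audit, so the claim is blocked.
contract = AnalysisContract(
    method="difference-in-differences",
    required_assumptions=["parallel pre-trends", "no anticipation"],
    audit_results={"parallel pre-trends": True, "no anticipation": False},
    disconfirming_result="pre-period event-study coefficients jointly nonzero at the 5% level",
)
print(contract.cleared_to_claim())  # prints False
```

The point of the sketch is the ordering: the audit and the disconfirming criterion are fixed before the output is inspected, so a polished AI-generated result cannot retroactively launder an unverified assumption into a credible one.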