The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
2026-03-25 • Artificial Intelligence
AI summary
The authors study how AI systems make sequential decisions in organizations, focusing on the balance between reliability and the cost of human oversight. They introduce a mathematical framework that measures the uncertainty and blind spots in an AI agent's decision steps within a workflow. Using a large dataset from a business purchasing process, they find that even when AI actions look reliable at a coarse level, a more detailed view of the context reveals significant uncertainty about next steps. Their approach helps estimate how accurate autonomous decisions will be and how much human oversight they require, with the aim of improving practical AI deployment in complex organizational processes.
agentic AI, sequential decision problem, Markov framework, workflow actions, state blind-spot, oversight cost, entropy-based escalation, business process intelligence, event logs
Authors
Biplab Pal, Santanu Bhattacharya
Abstract
Agentic artificial intelligence (AI) in organizations is a sequential decision problem constrained by reliability and oversight cost. When deterministic workflows are replaced by stochastic policies over actions and tool calls, the key question is not whether a next step appears plausible, but whether the resulting trajectory remains statistically supported, locally unambiguous, and economically governable. We develop a measure-theoretic Markov framework for this setting. The core quantities are state blind-spot mass B_n(tau), state-action blind mass B^SA_{pi,n}(tau), an entropy-based human-in-the-loop escalation gate, and an expected oversight-cost identity over the workflow visitation measure. We instantiate the framework on the Business Process Intelligence Challenge 2019 purchase-to-pay log (251,734 cases, 1,595,923 events, 42 distinct workflow actions) and construct a log-driven simulated agent from a chronological 80/20 split of the same process. The main empirical finding is that a large workflow can appear well supported at the state level while retaining substantial blind mass over next-step decisions: refining the operational state to include case context, economic magnitude, and actor class expands the state space from 42 to 668 and raises state-action blind mass from 0.0165 at tau=50 to 0.1253 at tau=1000. On the held-out split, m(s) = max_a pi-hat(a|s) tracks realized autonomous step accuracy within 3.4 percentage points on average. The same quantities that delimit statistically credible autonomy also determine expected oversight burden. The framework is demonstrated on a large-scale enterprise procurement workflow and is designed for direct application to engineering processes for which operational event logs are available.
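The two central quantities in the abstract can be illustrated with a minimal sketch. The reading below is an assumption, not the paper's definitions: blind-spot mass is taken as the fraction of observed visits falling in states seen fewer than tau times, and the escalation gate fires when the entropy of the next-action distribution exceeds a threshold. The function names and the threshold value are illustrative.

```python
from collections import Counter
import math

def blind_spot_mass(states, tau):
    """Fraction of observed visits that fall in states visited fewer
    than tau times -- an assumed reading of B_n(tau)."""
    counts = Counter(states)
    n = len(states)
    rare = sum(c for c in counts.values() if c < tau)
    return rare / n

def entropy_gate(action_probs, threshold):
    """Escalate to a human when the entropy (in bits) of the
    next-action distribution exceeds a threshold -- an illustrative
    human-in-the-loop escalation rule."""
    h = -sum(p * math.log2(p) for p in action_probs if p > 0)
    return h > threshold

# Toy event stream: one frequent workflow state, two rare ones.
states = ["create_po"] * 90 + ["block_invoice"] * 8 + ["cancel"] * 2
print(blind_spot_mass(states, tau=10))        # -> 0.1

# A peaked policy passes; a flat policy escalates.
print(entropy_gate([0.9, 0.05, 0.05], 1.0))   # -> False
print(entropy_gate([0.4, 0.3, 0.3], 1.0))     # -> True
```

Refining the state (adding case context, economic magnitude, actor class) splits each state into many rarer ones, which is why the abstract reports blind mass rising as the state space grows from 42 to 668.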