HAAS: A Policy-Aware Framework for Adaptive Task Allocation Between Humans and Artificial Intelligence Systems

2026-05-04 · Artificial Intelligence

Artificial Intelligence · Human-Computer Interaction · Software Engineering
AI summary

The authors examine how best to share work between humans and AI, arguing the choice is not binary but a mix that depends on context. They introduce HAAS, a system combining fixed rules with machine learning to decide who does what in software-engineering and manufacturing tasks. Their experiments show that tuning the rules governing AI autonomy shapes outcomes, sometimes making work both easier and more efficient. They find no one-size-fits-all setting, though moderate governance performs increasingly well as the system learns. Overall, HAAS helps organisations compare task-allocation policies before committing to one.

Human-AI collaboration · Task allocation · Adaptive systems · Rule-based systems · Contextual bandits · Governance in AI · Cognitive dimensions · Autonomy spectrum · Workload management · Operational performance
Authors
Vicente Pelechano, Antoni Mestre, Manoli Albert, Miriam Gil
Abstract
Deciding how to distribute work between humans and AI systems is a central challenge in organisational design. Most approaches treat this as a binary choice, yet the operational reality is richer: humans and AI routinely share tasks or take complementary roles depending on context, fatigue, and the stakes involved. Governing that distribution -- balancing efficiency, oversight, and human capability -- remains an open problem. This paper presents Human-AI Adaptive Symbiosis (HAAS), an implemented framework for adaptive task allocation in software engineering and manufacturing. HAAS combines two coupled components: a rule-based expert system that enforces governance constraints before any learning occurs, and a contextual-bandit learner that selects among feasible collaboration modes from outcome feedback. Task-agent fit is represented through five auditable cognitive dimensions and a five-mode autonomy spectrum -- from human-only to fully autonomous -- embedded in a reproducible benchmark spanning both domains. Three empirical findings emerge. First, governance is not a binary switch but a tunable design variable: tighter constraints predictably convert autonomous AI assignments into supervised collaborations, with domain-specific costs and benefits. Second, in manufacturing, stronger governance can improve operational performance and reduce fatigue simultaneously -- a workload-buffering effect that contradicts the usual framing of governance as pure overhead. Third, no single governance setting dominates across all contexts; moderate governance becomes increasingly competitive as the learner accumulates experience within the governed action space. Together, these findings position HAAS as a pre-deployment workbench for comparing and inspecting human--AI allocation policies before organisational commitment.
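The coupling the abstract describes, where a rule layer constrains the feasible collaboration modes before a contextual bandit learns over them, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the mode names, the governance rules, the `strictness` parameter, and the epsilon-greedy learner are all assumptions chosen to show the governance-then-learning pattern.

```python
import random
from collections import defaultdict

# Five-mode autonomy spectrum from the abstract (names are illustrative).
MODES = ["human_only", "ai_assist", "shared", "ai_lead_supervised", "autonomous"]

def governance_filter(context, strictness):
    """Rule layer: restrict feasible modes before any learning occurs.
    Illustrative rule: under tighter governance, high-stakes tasks lose
    autonomous modes and are converted into supervised collaborations."""
    feasible = list(MODES)
    if context["stakes"] == "high" and strictness >= 1:
        feasible.remove("autonomous")          # force human oversight
    if context["stakes"] == "high" and strictness >= 2:
        feasible.remove("ai_lead_supervised")  # tighten further
    return feasible

class EpsilonGreedyBandit:
    """Minimal contextual bandit: per-(context, mode) mean-reward estimates."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context_key, mode) -> pulls
        self.values = defaultdict(float)  # (context_key, mode) -> mean reward

    def select(self, context_key, feasible):
        # Explore with probability epsilon, otherwise exploit the best
        # estimate -- but only ever within the governed action space.
        if random.random() < self.epsilon:
            return random.choice(feasible)
        return max(feasible, key=lambda m: self.values[(context_key, m)])

    def update(self, context_key, mode, reward):
        key = (context_key, mode)
        self.counts[key] += 1
        # Incremental mean update from outcome feedback.
        self.values[key] += (reward - self.values[key]) / self.counts[key]

# Usage: governance constrains the action space; the learner adapts within it.
bandit = EpsilonGreedyBandit()
ctx = {"stakes": "high"}
feasible = governance_filter(ctx, strictness=1)
mode = bandit.select(ctx["stakes"], feasible)
bandit.update(ctx["stakes"], mode, reward=0.8)
```

Because the filter runs first, the learner never sees (and so never reinforces) a mode that governance forbids, which is how tightening constraints "predictably converts autonomous AI assignments into supervised collaborations" in this design.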