ToolCUA: Towards Optimal GUI-Tool Path Orchestration for Computer Use Agents
2026-05-12 • Artificial Intelligence
AI summary
The authors address a challenge faced by computer use agents that interact with software through both basic actions, like clicking, and advanced tools, like APIs: the agents are often uncertain about when to switch between the two. To solve this, they created ToolCUA, an agent trained through a multi-stage process that synthesizes mixed GUI-and-tool action data and applies reinforcement learning to improve switching decisions. Their experiments show that ToolCUA substantially outperforms previous approaches and handles combined GUI and tool actions well. This suggests that training agents to flexibly use both action types can improve their efficiency in real software environments.
Computer Use Agents · GUI actions · API tools · Reinforcement Learning · Trajectory synthesis · Supervised Fine-Tuning · Hybrid action space · Path selection · Online agentic RL · OSWorld-MCP benchmark
Authors
Xuhao Hu, Xi Zhang, Haiyang Xu, Kyle Qiao, Jingyi Yang, Xuanjing Huang, Jing Shao, Ming Yan, Jieping Ye
Abstract
Computer Use Agents (CUAs) can act through both atomic GUI actions, such as click and type, and high-level tool calls, such as API-based file operations, but this hybrid action space often leaves them uncertain about when to continue with GUI actions or switch to tools, leading to suboptimal execution paths. This difficulty stems from the scarcity of high-quality interleaved GUI-Tool trajectories, the cost and brittleness of collecting real tool trajectories, and the lack of trajectory-level supervision for GUI-Tool path selection. In this paper, we propose ToolCUA, an end-to-end agent designed to learn optimal GUI-Tool path selection through a staged training paradigm. We first introduce an Interleaved GUI-Tool Trajectory Scaling Pipeline that repurposes abundant static GUI trajectories and synthesizes a grounded tool library, enabling diverse GUI-Tool trajectories without manual engineering or real tool-trajectory collection. We then perform Tool-Bootstrapped GUI RFT, combining warmup SFT with single-turn RL to improve decisions at critical GUI-Tool switching points. Finally, we optimize ToolCUA with Online Agentic RL in a high-fidelity GUI-Tool environment, guided by a Tool-Efficient Path Reward that encourages appropriate tool use and shorter execution paths. Experiments on OSWorld-MCP show that ToolCUA achieves 46.85% accuracy, a relative improvement of approximately 66% over the baseline, establishing a new state of the art among models of comparable scale. It also improves by 3.9% over GUI-only settings, demonstrating effective GUI-Tool orchestration. The results further suggest that training in a hybrid action space is a promising paradigm for real-world digital agents. Open-sourced here: https://x-plug.github.io/ToolCUA/
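The Tool-Efficient Path Reward described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical formulation (the name, weights, and exact terms are assumptions, not the paper's reported design): it rewards task success, adds an efficiency term that grows as trajectories get shorter, and grants a capped bonus for tool calls so that appropriate tool use is encouraged without being spammed.

```python
# Hypothetical sketch of a trajectory-level "Tool-Efficient Path Reward".
# All constants and the exact functional form are illustrative assumptions.

def tool_efficient_path_reward(success: bool,
                               num_steps: int,
                               num_tool_calls: int,
                               max_steps: int = 30,
                               length_weight: float = 0.5,
                               tool_bonus: float = 0.05) -> float:
    """Score one trajectory.

    success        -- whether the task was completed
    num_steps      -- total actions taken (GUI actions + tool calls)
    num_tool_calls -- how many of those actions were high-level tool calls
    """
    if not success:
        return 0.0
    # Shorter successful paths earn a larger efficiency term.
    efficiency = length_weight * max(0.0, 1.0 - num_steps / max_steps)
    # Small per-tool-call bonus, capped so tools are used, not overused.
    bonus = min(tool_bonus * num_tool_calls, 0.2)
    return 1.0 + efficiency + bonus
```

Under this sketch, a successful 6-step trajectory that used 2 tool calls scores higher than a successful 20-step GUI-only one, matching the abstract's intuition that tool use should shorten execution paths.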