Gym-Anything: Turn any Software into an Agent Environment
2026-04-07 • Machine Learning
Machine Learning · Artificial Intelligence
AI summary
The authors created Gym-Anything, a system that can turn any software into a test environment for training computer-use agents, which are programs designed to perform tasks on computers. They use two types of agents: one sets up the software and gathers data, and another checks to make sure everything is done correctly. Using this system, they built CUA-World, a large set of over 10,000 realistic and complex tasks from many different job fields, including very long tasks requiring 500+ steps. Their work aims to help develop smarter agents that can perform meaningful, economic-related computer tasks, and they provide their tools and data to the research community.
computer-use agents, interactive environments, multi-agent systems, long-horizon tasks, Gym-Anything, CUA-World, vision-language model, task auditing, economic value in AI, benchmark datasets
Authors
Pranjal Aggarwal, Graham Neubig, Sean Welleck
Abstract
Computer-use agents hold the promise of assisting in a wide range of digital economic activities. However, current research has largely focused on short-horizon tasks over a limited set of software with limited economic value, such as basic e-commerce and OS-configuration tasks. A key reason is that creating environments for complex software requires significant time and human effort, and therefore does not scale. To address this, we introduce Gym-Anything, a framework for converting any software into an interactive computer-use environment. We frame environment creation itself as a multi-agent task: a coding agent writes setup scripts, downloads real-world data, and configures the software, while producing evidence of correct setup. An independent audit agent then verifies this evidence against a quality checklist. Using a taxonomy of economically valuable occupations grounded in U.S. GDP data, we apply this pipeline to 200 software applications with broad occupational coverage. The result is CUA-World, a collection of over 10K long-horizon tasks spanning domains from medical science and astronomy to engineering and enterprise systems, each configured with realistic data along with train and test splits. CUA-World also includes CUA-World-Long, a challenging long-horizon benchmark with tasks often requiring over 500 steps, far exceeding existing benchmarks. A 2B vision-language model distilled on successful trajectories from the training split outperforms models 2$\times$ its size. We also apply the same auditing principle at test time: a separate VLM reviews completed trajectories and provides feedback on what remains, improving Gemini-3-Flash on CUA-World-Long from 11.5% to 14.0%. We release all code, infrastructure, and benchmark data to facilitate future research in realistic computer-use agents.
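The setup-then-audit loop the abstract describes can be sketched in a few lines. Everything below is an illustrative toy, not the paper's implementation: the `Evidence` fields, the checklist entries, and the placeholder setup step are all hypothetical names chosen for this sketch; the real coding agent writes scripts and downloads data, which is stubbed out here.

```python
# Toy sketch of the two-agent environment-creation pipeline:
# a setup step produces evidence of correct configuration, and an
# independent audit step verifies that evidence against a checklist.
# All names here (Evidence, build_environment, checklist items) are
# hypothetical and for illustration only.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    """Artifacts the setup agent produces to prove correct configuration."""
    setup_script: str
    data_sources: list = field(default_factory=list)


def audit(evidence: Evidence, checklist: list) -> list:
    """Audit agent stand-in: return the checklist items the evidence fails."""
    return [item for item, check in checklist if not check(evidence)]


# Hypothetical quality checklist: each entry pairs a description
# with a predicate over the evidence.
checklist = [
    ("setup script exists", lambda e: bool(e.setup_script.strip())),
    ("real-world data configured", lambda e: len(e.data_sources) > 0),
]


def build_environment(software: str, max_rounds: int = 3):
    """Setup agent proposes, audit agent verifies; retry until the
    checklist passes or the round budget is exhausted."""
    for _ in range(max_rounds):
        # Placeholder for the coding agent: in the real system it writes
        # setup scripts, downloads data, and configures `software`.
        evidence = Evidence(
            setup_script=f"install {software}",
            data_sources=["sample-dataset"],
        )
        if not audit(evidence, checklist):
            return evidence  # audit passed: environment accepted
    return None  # rejected after max_rounds


env = build_environment("example-software")
print(env is not None)  # True: the toy evidence satisfies both checks
```

The key design point carried over from the abstract is the independence of the two roles: the audit function never trusts the setup step's self-report, only the concrete evidence checked against the list.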