$\texttt{YC-Bench}$: Benchmarking AI Agents for Long-Term Planning and Consistent Execution

2026-04-01

Computation and Language · Artificial Intelligence
AI summary

The authors created YC-Bench, a test for language models that simulates running a startup for a year to see how well they plan and adapt over long periods. They tested 12 different models and found only three could consistently make a profit, with Claude Opus 4.6 doing the best. They noticed that persisting information in scratchpads helped models succeed, while many failed because they could not detect adversarial clients. Their results show that even top models have trouble managing long-term tasks without specific improvements. The benchmark is open for others to use and improve upon.

LLM agents · long-horizon planning · strategic coherence · scratchpad · partial observability · adversarial clients · startup simulation · inference cost · benchmark · model evaluation
Authors
Muyu He, Adit Jain, Anand Kumar, Vincent Tu, Soumyadeep Bakshi, Sachin Patro, Nazneen Rajani
Abstract
As LLM agents tackle increasingly complex tasks, a critical question is whether they can maintain strategic coherence over long horizons: planning under uncertainty, learning from delayed feedback, and adapting when early mistakes compound. We introduce $\texttt{YC-Bench}$, a benchmark that evaluates these capabilities by tasking an agent with running a simulated startup over a one-year horizon spanning hundreds of turns. The agent must manage employees, select task contracts, and maintain profitability in a partially observable environment where adversarial clients and growing payroll create compounding consequences for poor decisions. We evaluate 12 models, both proprietary and open source, across 3 seeds each. Only three models consistently surpass the starting capital of \$200K, with Claude Opus 4.6 achieving the highest average final funds at \$1.27M, followed by GLM-5 at \$1.21M at 11$\times$ lower inference cost. Scratchpad usage, the sole mechanism for persisting information across context truncation, is the strongest predictor of success, and failure to detect adversarial clients is the primary failure mode, accounting for $47\%$ of bankruptcies. Our analysis reveals that frontier models still fail through distinct failure modes such as over-parallelization, exposing capability gaps in long-horizon performance. $\texttt{YC-Bench}$ is open-source, reproducible, and configurable.