PlayCoder: Making LLM-Generated GUI Code Playable

2026-04-21 · Software Engineering

AI summary

The authors studied how well large language models (LLMs) can create interactive GUI applications such as games, which demand more than passing unit tests because they involve event-driven user interactions and state transitions. They created PlayEval, a benchmark of 43 real GUI applications in Python, TypeScript, and JavaScript, and introduced Play@k, a metric that checks whether at least one of k generated candidates can be played end-to-end without logical errors. They also built PlayTester, an LLM-based agent that plays these apps to find logic errors automatically. Testing 10 state-of-the-art LLMs showed that, despite high compilation rates, the models rarely generate fully working GUI code. To address this, the authors developed PlayCoder, a multi-agent system that iteratively repairs the generated code, leading to substantially better results and the ability to detect silent logic bugs that traditional metrics miss.

Large Language Models · GUI Applications · Code Generation · Benchmarking · PlayEval · Play@k Metric · PlayTester · Interactive Testing · Iterative Repair · Semantic Alignment
Authors
Zhiyuan Peng, Wei Tao, Xin Yin, Chenhao Ying, Yuan Luo, Yiwen Guo
Abstract
Large language models (LLMs) have achieved strong results in code generation, but their ability to generate GUI applications, especially games, remains insufficiently studied. Existing benchmarks mainly evaluate correctness through test cases, which are inadequate for GUI applications because these systems are interactive, event-driven, and require correct state transitions across sequences of user actions. Their evaluation should therefore consider interaction flows and UI logic rather than only pass/fail outcomes. To study this problem, we introduce PlayEval, a repository-aware benchmark built from 43 multilingual GUI applications in Python, TypeScript, and JavaScript. Unlike prior GUI benchmarks that are difficult to adapt to desktop environments, PlayEval covers six major GUI application categories and directly supports code-generation evaluation. We further propose Play@k, a metric that measures whether at least one of *k* generated candidates can be played end-to-end without logical errors. To support reliable evaluation, we develop PlayTester, an LLM-based agent that performs task-oriented GUI playthroughs and detects logic violations automatically. Experiments on 10 state-of-the-art code LLMs show that, despite high compilation rates, they achieve near-zero Play@3, revealing major weaknesses in generating logically correct GUI applications. To address this limitation, we present PlayCoder, a multi-agent, repository-aware framework that generates, evaluates, and iteratively repairs GUI application code in a closed loop. PlayCoder substantially improves both functional correctness and semantic alignment for both open-source and closed-source models, reaching up to 38.1% Exec@3 and 20.3% Play@3. Case studies further show that it can uncover silent logic bugs missed by traditional metrics and fix them through targeted edits.
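
The abstract does not spell out how Play@k is computed, but since it parallels the familiar pass@k metric, one natural reading is the standard unbiased pass@k estimator with "playable end-to-end" substituted for "passes the tests". The sketch below is a minimal illustration under that assumption; the function name `play_at_k` is hypothetical, not taken from the paper.

```python
from math import comb

def play_at_k(n: int, c: int, k: int) -> float:
    """Estimate Play@k for one task, assuming it uses the same
    unbiased estimator as pass@k: the probability that at least one
    of k candidates drawn from n generated samples is playable,
    given that c of the n samples played end-to-end without
    logical errors.
    """
    if n - c < k:
        # Too few unplayable samples: every size-k draw must
        # contain at least one playable candidate.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples generated for a task, 2 playable -> Play@3
print(round(play_at_k(n=10, c=2, k=3), 3))  # 0.533
```

Averaging this per-task quantity over the 43 benchmark applications would then yield a benchmark-level Play@k score such as the Play@3 numbers reported in the abstract.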