SCOPE: Structured Decomposition and Conditional Skill Orchestration for Complex Image Generation

2026-05-08 · Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors address the challenge of making text-to-image models follow detailed instructions by focusing on "semantic commitments": the specific requirements an image must satisfy. They identify a failure mode they call the Conceptual Rift, in which these commitments get lost or conflated across the stages of image generation. To address it, they propose SCOPE, a framework that tracks commitments in a persistent structured specification and conditionally invokes retrieval, reasoning, and repair skills to resolve or fix them. Evaluated on a new benchmark, Gen-Arena, SCOPE outperforms other methods at producing images that satisfy complex instructions.

text-to-image models, semantic commitments, Conceptual Rift, SCOPE, specification-guided framework, retrieval, reasoning, repair, Gen-Arena, Entity-Gated Intent Pass Rate (EGIP)
Authors
Tianfei Ren, Zhipeng Yan, Yiming Zhao, Zhen Fang, Yu Zeng, Guohui Zhang, Hang Xu, Xiaoxiao Ma, Shiting Huang, Ke Xu, Wenxuan Huang, Lionel Z. Wang, Lin Chen, Zehui Chen, Jie Huang, Feng Zhao
Abstract
While text-to-image models have made strong progress in visual fidelity, faithfully realizing complex visual intents remains challenging because many requirements must be tracked across grounding, generation, and verification. We refer to these requirements as semantic commitments and formalize their lifecycle discontinuity as the Conceptual Rift, where commitments may be locally resolved or checked but fail to remain identifiable as the same operational units throughout the generation lifecycle. To address this, we propose SCOPE, a specification-guided skill orchestration framework that maintains semantic commitments in an evolving structured specification and conditionally invokes retrieval, reasoning, and repair skills around unresolved or violated commitments. To evaluate commitment-level intent realization, we introduce Gen-Arena, a human-annotated benchmark with entity- and constraint-level specifications, together with Entity-Gated Intent Pass Rate (EGIP), a strict entity-first pass criterion. SCOPE substantially outperforms all evaluated baselines on Gen-Arena, achieving 0.60 EGIP, and further achieves strong results on WISE-V (0.907) and MindBench (0.61), demonstrating the effectiveness of persistent commitment tracking for complex image generation.
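The abstract's core idea, maintaining commitments as persistent operational units and conditionally invoking skills around the unresolved or violated ones, can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation: the `Commitment` structure, status values, and skill names here are assumptions made purely to show the orchestration pattern.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    UNRESOLVED = "unresolved"
    SATISFIED = "satisfied"
    VIOLATED = "violated"


@dataclass
class Commitment:
    # One entity- or constraint-level requirement, e.g. "cup: color == red".
    name: str
    status: Status = Status.UNRESOLVED


def orchestrate(spec: list[Commitment], skills: dict) -> list[Commitment]:
    """Conditionally invoke skills only for commitments that need them,
    so each commitment stays identifiable across the lifecycle."""
    for c in spec:
        if c.status is Status.UNRESOLVED:
            # Grounding/verification step: a skill resolves or flags the commitment.
            c.status = skills["retrieval"](c)
        if c.status is Status.VIOLATED:
            # Targeted repair is invoked only for violated commitments.
            c.status = skills["repair"](c)
    return spec


# Toy skills for demonstration: retrieval flags one commitment as violated,
# and repair always fixes a violation.
skills = {
    "retrieval": lambda c: Status.VIOLATED if "red" in c.name else Status.SATISFIED,
    "repair": lambda c: Status.SATISFIED,
}
spec = [Commitment("cup: color == red"), Commitment("table: count == 2")]
result = orchestrate(spec, skills)
print([c.status.value for c in result])  # -> ['satisfied', 'satisfied']
```

The point of the sketch is that skills are dispatched per commitment based on its current status, rather than re-running the whole pipeline, mirroring the paper's claim that persistent commitment tracking keeps requirements checkable end to end.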