EchoGen: Cycle-Consistent Learning for Unified Layout-Image Generation and Understanding

2026-03-18 · Computer Vision and Pattern Recognition

AI summary

The authors developed EchoGen, a system that generates images from layouts and text descriptions while also localizing objects within those images. They found that training the system on both tasks together improves each one, but that joint training is difficult to optimize. To address this, they use a three-stage training process that gradually builds up the model's abilities. Their experiments show that EchoGen achieves top performance and benefits from combining both tasks in one model.

layout-to-image generation · image grounding · multi-task learning · progressive training · reinforcement learning · GRPO strategy · spatial relationships · visual supervision
Authors
Kai Zou, Hongbo Liu, Dian Zheng, Jianxiong Gao, Zhiwei Zhao, Bin Liu
Abstract
In this work, we present EchoGen, a unified framework for layout-to-image generation and image grounding that generates images with accurate layouts and high fidelity to text descriptions (e.g., their spatial relationships), while robustly grounding those images at the same time. We argue that image grounding demands strong text and layout understanding, which can compensate for the corresponding weaknesses of layout-to-image generation; conversely, images generated from layouts are highly diverse in content, which improves the robustness of image grounding. Jointly training both tasks within a unified model can therefore improve each. However, we identify that this joint training paradigm encounters several optimization challenges and yields limited performance. To address these issues, we propose progressive training strategies. First, the Parallel Multi-Task Pre-training (PMTP) stage equips the model with basic abilities for both tasks, leveraging shared tokens to accelerate training. Next, the Dual Joint Optimization (DJO) stage exploits the duality of the two tasks to integrate them sequentially, enabling unified optimization. Finally, the Cycle RL stage removes the reliance on visual supervision by using cycle-consistency constraints as rewards, significantly enhancing the model's unified capabilities via the GRPO strategy. Extensive experiments demonstrate state-of-the-art results on both layout-to-image generation and image grounding benchmarks, and reveal clear synergistic gains from optimizing the two tasks together.
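The Cycle RL idea described above can be made concrete with a small sketch: the layout used to condition generation is compared against the boxes the model grounds back from its own generated image, so the reward needs no visual ground truth. This is a minimal illustration assuming an IoU-based consistency reward with boxes matched by label; the function names (`iou`, `cycle_reward`) and the exact matching scheme are our assumptions, not details from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def cycle_reward(input_layout, regrounded_layout):
    """Cycle-consistency reward: mean IoU between the conditioning boxes
    and the boxes the model grounds back from its generated image.
    Layouts are dicts mapping an object label to one box; a label the
    grounder misses contributes zero (matched against an empty box)."""
    scores = [
        iou(box, regrounded_layout.get(label, (0.0, 0.0, 0.0, 0.0)))
        for label, box in input_layout.items()
    ]
    return sum(scores) / len(scores) if scores else 0.0
```

In a GRPO-style loop, a reward of this shape would be computed for each sampled generation in a group and normalized within the group to form the advantage; that surrounding machinery is standard policy optimization and is omitted here.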