Wan-Weaver: Interleaved Multi-modal Generation via Decoupled Training

2026-03-26
Computer Vision and Pattern Recognition

AI summary

The authors address the problem that current unified AI models can understand multiple input modalities but usually produce output in only one form, such as text or images. They propose a system called Wan-Weaver that splits interleaved generation into two parts: producing detailed textual plans and generating matching images, which together let the model create mixed text-and-image content. By training on large amounts of text-based proxy data and reference-guided image data, their method achieves long-range coherence and visual consistency without needing real interleaved data. They also build a benchmark for this task, on which the model outperforms previous approaches.

multi-modal inputs, interleaved generation, textual planning, visual consistency, planner, visualizer, long-range context, proxy data, image synthesis, benchmark
Authors
Jinbo Xing, Zeyinzi Jiang, Yuxiang Tuo, Chaojie Mao, Xiaotang Gai, Xi Chen, Jingfeng Zhang, Yulin Pan, Zhen Han, Jie Xiao, Keyu Yan, Chenwei Xie, Chongyang Zhong, Kai Zhu, Tong Shen, Lianghua Huang, Yu Liu, Yujiu Yang
Abstract
Recent unified models have made unprecedented progress in both understanding and generation. However, while most of them accept multi-modal inputs, they typically produce only single-modality outputs. This challenge of producing interleaved content is mainly due to training data scarcity and the difficulty of modeling long-range cross-modal context. To address this issue, we decompose interleaved generation into textual planning and visual consistency modeling, and introduce a framework consisting of a planner and a visualizer. The planner produces dense textual descriptions for visual content, while the visualizer synthesizes images accordingly. Under this guidance, we construct large-scale textual-proxy interleaved data (where visual content is represented in text) to train the planner, and curate reference-guided image data to train the visualizer. These designs give rise to Wan-Weaver, which exhibits emergent interleaved generation ability with long-range textual coherence and visual consistency. Meanwhile, the integration of diverse understanding and generation data into planner training enables Wan-Weaver to achieve robust task reasoning and generation proficiency. To assess the model's capability in interleaved generation, we further construct a benchmark that spans a wide range of use cases across multiple dimensions. Extensive experiments demonstrate that, even without access to any real interleaved data, Wan-Weaver achieves superior performance over existing methods.
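The planner/visualizer decomposition described in the abstract can be sketched as a simple two-stage pipeline: the planner emits interleaved text alongside dense textual descriptions that stand in for images, and the visualizer turns each description into an image while conditioning on previously generated images for consistency. This is a minimal illustrative sketch only; all class names, function signatures, and string outputs below are hypothetical and do not reflect the paper's actual interfaces.

```python
# Hypothetical sketch of the planner/visualizer decomposition.
# All names are illustrative, not the paper's real API.
from dataclasses import dataclass


@dataclass
class Segment:
    text: str           # narrative text for this step
    image_caption: str  # dense textual proxy standing in for an image


def plan(prompt: str, n_steps: int) -> list[Segment]:
    """Planner stub: emit interleaved text plus dense textual
    descriptions of the visual content (the textual-proxy format)."""
    return [
        Segment(
            text=f"Step {i + 1} of the narrative for: {prompt}",
            image_caption=f"Dense description of scene {i + 1} ({prompt})",
        )
        for i in range(n_steps)
    ]


def visualize(caption: str, references: list[str]) -> str:
    """Visualizer stub: synthesize an image from a caption, using
    earlier images as references to maintain visual consistency."""
    return f"<image of '{caption}' consistent with {len(references)} refs>"


def generate_interleaved(prompt: str, n_steps: int = 3) -> list[tuple[str, str]]:
    """Run planning once, then visualize each scene in order,
    feeding earlier outputs back in as reference images."""
    segments = plan(prompt, n_steps)
    images: list[str] = []
    out: list[tuple[str, str]] = []
    for seg in segments:
        img = visualize(seg.image_caption, references=images)
        images.append(img)  # later images condition on earlier ones
        out.append((seg.text, img))
    return out
```

The key design point mirrored here is that visual consistency is handled entirely by the visualizer (via reference images), while long-range coherence is the planner's job, so neither stage needs real interleaved training data.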