ReCap: Lightweight Referential Grounding for Coherent Story Visualization

2026-04-20 · Computer Vision and Pattern Recognition

AI summary

The authors developed ReCap, a new method to make sequences of images better match stories by keeping characters looking consistent across pictures. Instead of using big memory banks or complex language models, ReCap uses a small module that pays special attention to pronouns to keep track of who is who. They also added a training trick called SemDrift to stop characters from changing appearance when the story text is unclear. Their approach improves character consistency on popular story visualization tests and even works for realistic human stories, not just cartoons.

Story Visualization · Diffusion Models · Character Consistency · Pronoun Resolution · Semantic Drift · DINOv3 · Visual Embeddings · Coherence · Image Sequence Generation · Conditional Frame Referencing
Authors
Aditya Arora, Akshita Gupta, Pau Rodriguez, Marcus Rohrbach
Abstract
Story Visualization aims to generate a sequence of images that faithfully depicts a textual narrative while preserving character identity, spatial configuration, and stylistic coherence as the narrative unfolds. Maintaining such cross-frame consistency has traditionally relied on explicit memory banks, architectural expansion, or auxiliary language models, resulting in substantial parameter growth and inference overhead. We introduce ReCap, a lightweight consistency framework that improves character stability and visual fidelity without modifying the base diffusion backbone. ReCap's CORE (COnditional frame REferencing) module treats anaphors, in our case pronouns, as visual anchors, activating only when characters are referred to by a pronoun and conditioning on the preceding frame to propagate visual identity. This selective design avoids unconditional cross-frame conditioning and introduces only 149K additional parameters, a fraction of the cost of memory-bank and LLM-augmented approaches. To further stabilize identity, we incorporate SemDrift (Guided Semantic Drift Correction), applied only during training. When text is vague or referential, the denoiser lacks a visual anchor for identity-defining attributes, causing character appearance to drift across frames. SemDrift corrects this by aligning denoiser representations with pretrained DINOv3 visual embeddings, enforcing semantic identity stability at zero inference cost. ReCap outperforms the previous state of the art, StoryGPT-V, on the two main story visualization benchmarks, improving Character Accuracy by 2.63% on FlintstonesSV and by 5.65% on PororoSV, establishing new state-of-the-art character consistency on both. Furthermore, we extend story visualization to human-centric narratives derived from real films, demonstrating the capability of ReCap beyond stylized cartoon domains.
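The two mechanisms described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the pronoun list, the `gate_proj` matrix, and the cosine-alignment form of the SemDrift loss are illustrative assumptions; the paper's actual module and loss may differ.

```python
import numpy as np

# Assumed pronoun set for gating; the paper's anaphor detection may be richer.
PRONOUNS = {"he", "she", "they", "him", "her", "them", "his", "hers", "their", "it", "its"}

def has_anaphor(caption: str) -> bool:
    """Gate: activate cross-frame conditioning only when a pronoun appears."""
    return any(tok.strip(".,!?").lower() in PRONOUNS for tok in caption.split())

def core_condition(caption: str, prev_frame_emb: np.ndarray, gate_proj: np.ndarray) -> np.ndarray:
    """Pronoun-gated conditional frame referencing (illustrative sketch).

    Returns projected previous-frame features when the caption is referential,
    otherwise a zero vector, i.e. no extra conditioning. `gate_proj` stands in
    for CORE's small learned projection (the source of the ~149K parameters)."""
    if has_anaphor(caption):
        return gate_proj @ prev_frame_emb
    return np.zeros(gate_proj.shape[0])

def semdrift_loss(denoiser_feats: np.ndarray, dino_feats: np.ndarray) -> float:
    """Train-time alignment of denoiser features with frozen visual embeddings
    (e.g. DINOv3), sketched here as 1 - cosine similarity, averaged over frames."""
    f = denoiser_feats / np.linalg.norm(denoiser_feats, axis=-1, keepdims=True)
    g = dino_feats / np.linalg.norm(dino_feats, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(f * g, axis=-1)))
```

Because the conditioning path is only active on referential captions, frames whose text names the character explicitly are generated without cross-frame conditioning, and the loss term vanishes entirely at inference time.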