ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling

2026-03-26

Computer Vision and Pattern Recognition
AI summary

The authors introduce ShotStream, a system for generating multi-shot videos quickly and interactively. Instead of creating an entire video at once, ShotStream generates each shot based on what came before, letting users guide the story as it unfolds. A dual-cache memory keeps the video visually consistent across shots, and a two-stage training strategy reduces the errors that accumulate over time. The approach produces smooth, coherent videos at about 16 frames per second on a single GPU, matching or beating slower existing methods.

multi-shot video generation, causal architecture, text-to-video model, autoregressive generation, Distribution Matching Distillation, dual-cache memory, RoPE discontinuity indicator, self-forcing, interactive storytelling, frame latency
Authors
Yawen Luo, Xiaoyu Shi, Junhao Zhuang, Yutian Chen, Quande Liu, Xintao Wang, Pengfei Wan, Tianfan Xue
Abstract
Multi-shot video generation is crucial for long narrative storytelling, yet current bidirectional architectures suffer from limited interactivity and high latency. We propose ShotStream, a novel causal multi-shot architecture that enables interactive storytelling and efficient on-the-fly frame generation. By reformulating the task as next-shot generation conditioned on historical context, ShotStream allows users to dynamically instruct ongoing narratives via streaming prompts. We achieve this by first fine-tuning a text-to-video model into a bidirectional next-shot generator, which is then distilled into a causal student via Distribution Matching Distillation. To overcome the challenges of inter-shot consistency and error accumulation inherent in autoregressive generation, we introduce two key innovations. First, a dual-cache memory mechanism preserves visual coherence: a global context cache retains conditioning frames for inter-shot consistency, while a local context cache holds generated frames within the current shot for intra-shot consistency. A RoPE discontinuity indicator explicitly distinguishes the two caches, eliminating ambiguity between them. Second, to mitigate error accumulation, we propose a two-stage distillation strategy. It begins with intra-shot self-forcing conditioned on ground-truth historical shots and progressively extends to inter-shot self-forcing using self-generated histories, effectively bridging the train-test gap. Extensive experiments demonstrate that ShotStream generates coherent multi-shot videos with sub-second latency, achieving 16 FPS on a single GPU. It matches or exceeds the quality of slower bidirectional models, paving the way for real-time interactive storytelling. Training and inference code, as well as the models, are available on our
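The dual-cache mechanism can be pictured with a short sketch. The snippet below is a minimal illustration of the idea as stated in the abstract, not the authors' released code: the names (DualCache, rope_gap, keep_frames, tokens_per_frame) are hypothetical, and the RoPE discontinuity indicator is assumed here to be a large positional offset inserted between the global and local caches.

```python
# Minimal sketch of the dual-cache memory (assumed names/shapes, not the released code).
import torch

class DualCache:
    """Global cache: conditioning frames from past shots (inter-shot consistency).
    Local cache: frames already generated in the current shot (intra-shot consistency)."""

    def __init__(self, rope_gap: int = 10_000):
        self.global_kv = []        # (key, value) pairs from historical shots
        self.local_kv = []         # (key, value) pairs from the current shot
        self.rope_gap = rope_gap   # assumed: positional jump acting as the RoPE discontinuity indicator

    def append_local(self, k: torch.Tensor, v: torch.Tensor) -> None:
        # Each newly generated frame of the current shot extends the local cache.
        self.local_kv.append((k, v))

    def start_new_shot(self, keep_frames: int = 4) -> None:
        # Promote the tail of the finished shot into the global cache so the next
        # shot stays consistent with it, then reset the local cache.
        self.global_kv.extend(self.local_kv[-keep_frames:])
        self.local_kv = []

    def rope_positions(self, tokens_per_frame: int) -> torch.Tensor:
        # Global tokens occupy positions [0, G); local tokens continue after an
        # explicit gap, so attention can tell the two caches apart.
        n_global = len(self.global_kv) * tokens_per_frame
        n_local = len(self.local_kv) * tokens_per_frame
        global_pos = torch.arange(n_global)
        local_pos = torch.arange(n_local) + n_global + self.rope_gap
        return torch.cat([global_pos, local_pos])
```

At attention time, keys and values from both caches would be concatenated and rotated with these positions; the gap between the two index ranges plays the role of the discontinuity indicator.

The two-stage distillation can likewise be sketched as a curriculum over where the historical shots come from. The step thresholds and linear ramp below are assumptions for illustration; the abstract only states that training begins with ground-truth histories (intra-shot self-forcing) and progressively moves to self-generated histories (inter-shot self-forcing).

```python
# Illustrative curriculum for the two-stage self-forcing schedule (values are assumptions).
import random

def history_source(step: int, stage1_steps: int = 5_000, ramp_steps: int = 5_000) -> str:
    """Decide whether the conditioning history at this training step is ground truth
    or generated by the student itself."""
    if step < stage1_steps:
        return "ground_truth"      # stage 1: intra-shot self-forcing only
    # stage 2: ramp up the fraction of self-generated histories to bridge
    # the train-test gap of autoregressive generation
    p_self = min(1.0, (step - stage1_steps) / ramp_steps)
    return "self_generated" if random.random() < p_self else "ground_truth"
```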