Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance

2026-03-02

Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors address the challenge of precisely editing videos from instructions, which is hard because language alone cannot fully describe visual details. They devise a scalable way to generate training data, using image generators to produce reference images paired with editing tasks. From this they build a large dataset, RefVIE, and a benchmark for testing reference-guided video editing models. Their model, Kiwi-Edit, combines visual and textual information to follow both instructions and reference images more faithfully, yielding improved video editing results. The datasets and code are publicly released.

instruction-based video editing, reference-guided editing, image generative models, training data generation, RefVIE dataset, video editing benchmarks, semantic guidance, multi-stage training, latent visual features, learnable queries
Authors
Yiqi Lin, Guoqiang Liang, Ziyun Zeng, Zechen Bai, Yanzhe Chen, Mike Zheng Shou
Abstract
Instruction-based video editing has witnessed rapid progress, yet current methods often struggle with precise visual control, as natural language is inherently limited in describing complex visual nuances. Although reference-guided editing offers a robust solution, its potential is currently bottlenecked by the scarcity of high-quality paired training data. To bridge this gap, we introduce a scalable data generation pipeline that transforms existing video editing pairs into high-fidelity training quadruplets, leveraging image generative models to create synthesized reference scaffolds. Using this pipeline, we construct RefVIE, a large-scale dataset tailored for instruction-reference-following tasks, and establish RefVIE-Bench for comprehensive evaluation. Furthermore, we propose a unified editing architecture, Kiwi-Edit, that synergizes learnable queries and latent visual features for reference semantic guidance. Our model achieves significant gains in instruction following and reference fidelity via a progressive multi-stage training curriculum. Extensive experiments demonstrate that our data and architecture establish a new state of the art in controllable video editing. All datasets, models, and code are released at https://github.com/showlab/Kiwi-Edit.