FLARE: Full-Modality Long-Video Audiovisual Retrieval Benchmark with User-Simulated Queries
2026-05-11 • Multimedia
AI summary
The authors created FLARE, a new test set for finding specific parts of long videos using both sound and visuals together, driven by simulated user-style questions. Unlike older tests that use short clips or just one type of information, FLARE uses videos up to an hour long and requires combining video and audio to find the right segments. They tested many existing search models and found that doing well on simple caption searches doesn't always mean a model can handle realistic user queries, especially when it must work with sound and visuals at once. The authors also highlight that matching audio with language remains a tough challenge for these systems.
video retrieval • multimodal large language models • audiovisual data • long-form video • cross-modal retrieval • caption-based evaluation • query-based retrieval • audio-language alignment • benchmark dataset
Authors
Qijie You, Hao Liang, Mingrui Chen, Bohan Zeng, Meiyi Qiang, Zhenhao Wong, Wentao Zhang
Abstract
As video becomes increasingly central to information dissemination and multimodal large language models (MLLMs) continue to advance, evaluating video retrieval grows ever more important. In realistic search scenarios, retrieval requires matching short user queries to long-form content using both visual and auditory evidence. Yet existing retrieval benchmarks are still dominated by short clips, single modalities, and caption-based evaluation. We introduce FLARE, a full-modality long-video audiovisual retrieval benchmark with user-simulated queries. Built from 399 carefully screened Video-MME videos (10–60 min, 225.4 h in total) to ensure source quality and diversity, FLARE contains 87,697 clips annotated with vision, audio, and unified audiovisual captions, together with 274,933 user-style queries. Cross-modal queries are further filtered by a hard bimodal constraint: retrieval must fail under either modality alone but succeed when both are combined. FLARE evaluates models under two regimes, caption-based and query-based retrieval, across vision, audio, and unified audiovisual settings. Experiments with 15 representative retrievers show that user-style queries substantially change model behavior, that strong caption-based performance does not always transfer to query-based retrieval, and that audio–language alignment remains a key bottleneck for unified audiovisual retrieval. Our code and data are released at https://flarebench.github.io/.
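To make the hard bimodal constraint concrete, the sketch below shows one way such a filter could be expressed in Python. The retriever interface (`retrieve_topk`), the top-k success criterion, and all names here are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch of the hard bimodal filter described in the abstract:
# a cross-modal query is kept only if retrieval fails with either modality
# alone but succeeds when vision and audio are combined.
# `retrieve_topk` and the top-k success criterion are assumptions for
# illustration, not the paper's implementation.

def passes_bimodal_constraint(query, target_clip, retrieve_topk, k=10):
    """Return True if `query` satisfies the hard bimodal constraint.

    retrieve_topk(query, modality, k) is assumed to return the k clips
    ranked highest by a retriever restricted to the given modality.
    """
    vision_hits = retrieve_topk(query, modality="vision", k=k)
    audio_hits = retrieve_topk(query, modality="audio", k=k)
    unified_hits = retrieve_topk(query, modality="audiovisual", k=k)

    # Fail under either modality alone...
    fails_unimodal = (target_clip not in vision_hits
                      and target_clip not in audio_hits)
    # ...but succeed when both modalities are combined.
    succeeds_unified = target_clip in unified_hits
    return fails_unimodal and succeeds_unified


# Usage: keep only queries that pass the filter.
# filtered = [q for q, clip in candidates
#             if passes_bimodal_constraint(q, clip, retrieve_topk)]
```

The design choice to gate on both failure conditions simultaneously is what forces retained queries to genuinely require audiovisual fusion rather than being answerable from one modality.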