MMEmb-R1: Reasoning-Enhanced Multimodal Embedding with Pair-Aware Selection and Adaptive Control

2026-04-07 · Computer Vision and Pattern Recognition

Computer Vision and Pattern Recognition · Artificial Intelligence · Computation and Language
AI summary

The authors study how multimodal large language models (MLLMs) can reason more effectively when matching images and text. They observe that forcing the model to always think step-by-step is slow and can even be confusing for simple inputs. To fix this, they built a system called MMEmb-R1 that decides when reasoning is actually helpful by testing whether it improves alignment between image-text pairs, and uses reinforcement learning to invoke reasoning selectively. The result is a faster, more accurate model that sets a new state of the art on the MMEB-V2 benchmark with fewer parameters.

multimodal language models, embedding learning, chain-of-thought reasoning, contrastive supervision, latent variable, counterfactual intervention, reinforcement learning, inference latency, MMEB-V2 benchmark
Authors
Yuchi Wang, Haiyang Yu, Weikang Bian, Jiefeng Long, Xiao Liang, Chao Feng, Hongsheng Li
Abstract
Multimodal large language models (MLLMs) have been successfully applied to multimodal embedding tasks, yet their generative reasoning capabilities remain underutilized. Directly incorporating chain-of-thought reasoning into embedding learning introduces two fundamental challenges. First, structural misalignment between instance-level reasoning and pairwise contrastive supervision may lead to shortcut behavior, where the model merely learns the superficial format of reasoning. Second, reasoning is not universally beneficial for embedding tasks. Enforcing reasoning for all inputs may introduce unnecessary computation and latency, and can even obscure salient semantic signals for simple cases. To address these issues, we propose MMEmb-R1, an adaptive reasoning-based multimodal embedding framework. We formulate reasoning as a latent variable and introduce pair-aware reasoning selection, which employs counterfactual intervention to identify reasoning paths beneficial for query-target alignment. Furthermore, we adopt reinforcement learning to selectively invoke reasoning only when necessary. Experiments on the MMEB-V2 benchmark demonstrate that our model achieves a score of 71.2 with only 4B parameters, establishing a new state-of-the-art while significantly reducing reasoning overhead and inference latency.
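The pair-aware selection described in the abstract can be pictured as a counterfactual check: embed the query once without and once with a candidate reasoning path, and keep the path only if it measurably improves alignment with the target. The sketch below is purely illustrative; the function names, toy embeddings, and the `margin` threshold are hypothetical stand-ins, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def keep_reasoning(q_plain, q_reasoned, target, margin=0.05):
    """Counterfactual intervention (illustrative): accept a reasoning
    path only if the reasoning-augmented query embedding aligns better
    with the target than the plain query embedding, by at least `margin`."""
    gain = cosine(q_reasoned, target) - cosine(q_plain, target)
    return gain > margin

# Toy example: here reasoning moves the query embedding toward the target.
q_plain    = [1.0, 0.0, 0.0]
q_reasoned = [0.6, 0.8, 0.0]
target     = [0.0, 1.0, 0.0]
print(keep_reasoning(q_plain, q_reasoned, target))  # prints True (gain = 0.8)
```

In the paper's framework, a check of this kind filters which reasoning paths count as beneficial for contrastive training, while a separate reinforcement-learning policy decides at inference time whether to invoke reasoning at all.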