A-MAR: Agent-based Multimodal Art Retrieval for Fine-Grained Artwork Understanding
2026-04-21 • Artificial Intelligence
AI summary
The authors present A-MAR, a new system that helps explain artworks by breaking down questions into step-by-step plans and searching for evidence based on those plans. Unlike other models that guess answers using hidden knowledge, A-MAR explicitly shows the reasoning steps and the evidence it finds. They also created a test called ArtCoT-QA to see how well systems handle multi-step reasoning about art. Their experiments show that A-MAR gives clearer, better explanations than other approaches by carefully guiding the search for information.
multimodal large language models, retrieval systems, structured reasoning, evidence grounding, artwork explanation, multi-step reasoning, benchmark datasets, interpretability, cultural context, AI in art
Authors
Shuai Wang, Hongyi Zhu, Jia-Hong Huang, Yixian Shen, Chengxi Zeng, Stevan Rudinac, Monika Kackovic, Nachoem Wijnberg, Marcel Worring
Abstract
Understanding artworks requires multi-step reasoning over visual content and cultural, historical, and stylistic context. While recent multimodal large language models show promise in artwork explanation, they rely on implicit reasoning and internalized knowledge, limiting interpretability and explicit evidence grounding. We propose A-MAR, an Agent-based Multimodal Art Retrieval framework that explicitly conditions retrieval on structured reasoning plans. Given an artwork and a user query, A-MAR first decomposes the task into a structured reasoning plan that specifies the goals and evidence requirements for each step. Retrieval is then conditioned on this plan, enabling targeted evidence selection and supporting step-wise, grounded explanations. To evaluate agent-based multimodal reasoning within the art domain, we introduce ArtCoT-QA. This diagnostic benchmark features multi-step reasoning chains for diverse art-related queries, enabling a granular analysis that extends beyond simple final-answer accuracy. Experiments on SemArt and Artpedia show that A-MAR consistently outperforms static, non-planned retrieval and strong MLLM baselines in final explanation quality, while evaluations on ArtCoT-QA further demonstrate its advantages in evidence grounding and multi-step reasoning ability. These results highlight the importance of reasoning-conditioned retrieval for knowledge-intensive multimodal understanding and position A-MAR as a step toward interpretable, goal-driven AI systems, with particular relevance to cultural industries. The code and data are available at: https://github.com/ShuaiWang97/A-MAR.
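The abstract's core idea — decompose a query into a plan of goals with evidence requirements, then condition retrieval on each plan step — can be sketched minimally. This is not the authors' implementation: the hard-coded two-step planner, the token-overlap retriever, and all names (`PlanStep`, `decompose`, `retrieve`, `explain`) are hypothetical stand-ins; A-MAR uses an MLLM planner and a learned multimodal retriever.

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    goal: str            # what this reasoning step must establish
    evidence_query: str  # retrieval query derived from the goal

def decompose(query: str) -> list[PlanStep]:
    """Toy planner: in A-MAR an MLLM produces the structured plan;
    here a plausible two-step plan is hard-coded for illustration."""
    return [
        PlanStep(goal="Identify visual style", evidence_query=f"style of {query}"),
        PlanStep(goal="Situate historical context", evidence_query=f"history of {query}"),
    ]

def retrieve(corpus: dict[str, str], step: PlanStep) -> list[str]:
    """Plan-conditioned retrieval: rank documents by token overlap with the
    step's evidence query (a stand-in for a neural retriever)."""
    terms = set(step.evidence_query.lower().split())
    scored = [(len(terms & set(text.lower().split())), doc_id)
              for doc_id, text in corpus.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

def explain(query: str, corpus: dict[str, str]) -> list[tuple[str, list[str]]]:
    """Step-wise grounding: pair each plan step's goal with its retrieved evidence."""
    return [(step.goal, retrieve(corpus, step)) for step in decompose(query)]

corpus = {
    "doc1": "The style of this painting is Baroque with dramatic lighting",
    "doc2": "The history of the Dutch Golden Age shaped this painting",
}
steps = explain("this painting", corpus)
for goal, evidence in steps:
    print(goal, "->", evidence)
```

Because retrieval is conditioned per step, the style step surfaces the stylistic document first and the context step surfaces the historical one, giving each explanation step its own evidence — the contrast with static, plan-free retrieval that the abstract draws.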