Beyond Rows to Reasoning: Agentic Retrieval for Multimodal Spreadsheet Understanding and Editing
2026-03-06 • Computation and Language
AI summary
The authors present BRTR, a new method to help large language models better understand complex Excel spreadsheets with many cells, links, and visuals. Unlike previous techniques that look at the spreadsheet only once or compress away detail, BRTR uses a step-by-step process to retrieve and analyze information, allowing more accurate and detailed reasoning. Validated through extensive expert evaluation, BRTR outperforms past models on several spreadsheet benchmarks and records its reasoning steps for transparency. The authors also identify the best-performing embedding model and language model for this setting.
Retrieval-Augmented Generation · Large Language Models · Spreadsheet Understanding · Multimodal Embeddings · Iterative Reasoning · Excel Workflows · Tool-Calling Loop · Auditability · Benchmark Evaluation
Authors
Anmol Gulati, Sahil Sen, Waqar Sarguroh, Kevin Paul
Abstract
Recent advances in multimodal Retrieval-Augmented Generation (RAG) enable Large Language Models (LLMs) to analyze enterprise spreadsheet workbooks containing millions of cells, cross-sheet dependencies, and embedded visual artifacts. However, state-of-the-art approaches exclude critical context through single-pass retrieval, lose data resolution through compression, and exceed LLM context windows through naive full-context injection, preventing reliable multi-step reasoning over complex enterprise workbooks. We introduce Beyond Rows to Reasoning (BRTR), a multimodal agentic framework for spreadsheet understanding that replaces single-pass retrieval with an iterative tool-calling loop, supporting end-to-end Excel workflows from complex analysis to structured editing. Supported by over 200 hours of expert human evaluation, BRTR achieves state-of-the-art performance across three frontier spreadsheet understanding benchmarks, surpassing prior methods by 25 percentage points on FRTR-Bench, 7 points on SpreadsheetLLM, and 32 points on FINCH. We evaluate five multimodal embedding models, identifying NVIDIA NeMo Retriever 1B as the top performer for mixed tabular and visual data, and compare nine LLMs. Ablation experiments confirm that the planner, retrieval, and iterative reasoning each contribute substantially, and cost analysis shows GPT-5.2 achieves the best efficiency-accuracy trade-off. Throughout all evaluations, BRTR maintains full auditability through explicit tool-call traces.
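The core idea of replacing single-pass retrieval with an iterative tool-calling loop that logs every step can be illustrated with a minimal sketch. The tool names (`search_cells`, `read_range`), the scripted planner, and the workbook layout below are all illustrative assumptions, not the paper's actual implementation; in BRTR the planner would be an LLM choosing tools, whereas here a fixed script stands in for it.

```python
# Illustrative sketch of an iterative tool-calling loop over a spreadsheet.
# All names here (Workbook, search_cells, read_range) are hypothetical stand-ins,
# not the authors' API. A scripted "plan" replaces the LLM planner for clarity.

from dataclasses import dataclass


@dataclass
class Workbook:
    # sheet name -> {cell address: value}; a toy stand-in for a real Excel file
    sheets: dict


def read_range(wb, sheet, cells):
    # Tool 1: fetch the values of specific cells on a sheet.
    return {c: wb.sheets[sheet].get(c) for c in cells}


def search_cells(wb, keyword):
    # Tool 2: keyword search across all sheets, returning (sheet, address, value) hits.
    hits = []
    for sheet, cells in wb.sheets.items():
        for addr, val in cells.items():
            if keyword.lower() in str(val).lower():
                hits.append((sheet, addr, val))
    return hits


TOOLS = {"read_range": read_range, "search_cells": search_cells}


def run_agent(wb, plan, max_steps=8):
    """Iterate: pick a tool, observe its result, and append every call to an
    explicit trace (the auditability property), until the planner answers."""
    trace = []
    for _, action in zip(range(max_steps), plan):
        if action[0] == "answer":
            trace.append(("answer", action[1]))
            return action[1], trace
        name, kwargs = action
        result = TOOLS[name](wb, **kwargs)
        trace.append((name, kwargs, result))
    return None, trace  # step budget exhausted without an answer


# Usage: locate the "Revenue" label, then read the adjacent value.
wb = Workbook(sheets={"Q1": {"A1": "Revenue", "B1": 1200, "A2": "Cost", "B2": 800}})
plan = [
    ("search_cells", {"keyword": "revenue"}),
    ("read_range", {"sheet": "Q1", "cells": ["B1"]}),
    ("answer", 1200),
]
answer, trace = run_agent(wb, plan)
```

The trace retains each tool call with its arguments and result, so the final answer can be audited step by step rather than trusted as a single opaque retrieval.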