ASTRA-QA: A Benchmark for Abstract Question Answering over Documents

2026-05-11

Computation and Language · Information Retrieval
AI summary

The authors created ASTRA-QA, a new benchmark for testing how well question-answering systems handle abstract questions that require synthesizing information from long documents or across multiple documents. Each instance carries detailed answer annotations that make it possible to check whether answers cover the required key points and avoid unsupported content, without relying on costly head-to-head comparisons. The authors evaluated several retrieval-augmented models and showed that ASTRA-QA provides diagnostics for topic coverage, hallucination, and robustness across different retrieval scopes. This work aims to improve the evaluation of complex question-answering tasks.

Document-based Question Answering · Abstract Questions · Benchmark · Evaluation Metrics · Information Retrieval · Retrieval-Augmented Generation · Topic Coverage · Hallucination · Multi-document QA
Authors
Shu Wang, Shansong Zhou, Xinyang Wang, Shiwei Wang, Hulong Wu, Yixiang Fang
Abstract
Document-based question answering (QA) increasingly includes abstract questions that require synthesizing scattered information from long documents or across multiple documents into coherent answers. However, this setting is still poorly supported by existing benchmarks and evaluation methods, which often lack stable abstract references or rely on coarse similarity metrics and unstable head-to-head comparisons. To alleviate this issue, we introduce ASTRA-QA, a benchmark for AbSTRAct Question Answering over documents. ASTRA-QA contains 869 QA instances over academic papers and news documents, covering five abstract question types and three controlled retrieval scopes. Each instance is equipped with explicit evaluation annotations, including answer topic sets, curated unsupported topics, and aligned evidence. Building on these annotations, ASTRA-QA assesses whether answers cover required key points and avoid unsupported content by directly scoring topic coverage and curated unsupported content, enabling scalable evaluation without exhaustive head-to-head comparisons. Experiments with representative Retrieval-Augmented Generation (RAG) methods spanning vanilla, graph-based, and hierarchical retrieval settings show that ASTRA-QA provides reference-grounded diagnostics for coverage, hallucination, and retrieval-scope robustness. Our dataset and code are available at https://xinyangsally.github.io/astra-benchmark.
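To make the evaluation idea concrete, here is a minimal illustrative sketch (not the authors' released code) of how an answer might be scored against ASTRA-QA-style annotations: coverage is the fraction of required answer topics the answer mentions, and the unsupported rate is the fraction of curated unsupported topics that appear in the answer. The naive substring matching below, and the function and topic names, are assumptions for illustration only.

```python
# Illustrative sketch of reference-grounded scoring, assuming each QA
# instance carries a set of required answer topics and a set of curated
# unsupported topics (as described in the abstract). Topic matching here
# is naive case-insensitive substring matching, purely for illustration.

def score_answer(answer: str,
                 required_topics: set[str],
                 unsupported_topics: set[str]) -> tuple[float, float]:
    text = answer.lower()
    # Topics from the annotated answer topic set that the answer covers.
    covered = {t for t in required_topics if t.lower() in text}
    # Curated unsupported topics that leaked into the answer.
    leaked = {t for t in unsupported_topics if t.lower() in text}
    coverage = len(covered) / len(required_topics) if required_topics else 1.0
    unsupported_rate = (len(leaked) / len(unsupported_topics)
                        if unsupported_topics else 0.0)
    return coverage, unsupported_rate

cov, unsup = score_answer(
    "The paper proposes a graph retrieval method and reports gains on QA.",
    required_topics={"graph retrieval", "QA"},
    unsupported_topics={"image captioning"},
)
```

Scoring directly against annotated topic sets like this is what lets the benchmark avoid exhaustive pairwise answer comparisons: each answer is graded independently against fixed references.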