ExtractBench: A Benchmark and Evaluation Methodology for Complex Structured Extraction
2026-02-12 • Machine Learning • Artificial Intelligence
AI summary
The authors present ExtractBench, a new benchmark and evaluation framework for testing how well language models can pull structured data out of complex PDF documents. They highlight two main gaps: there was no end-to-end benchmark covering the broad, realistic schemas used in practice, and existing methods did not properly measure the different kinds of correctness that nested data requires. Their benchmark pairs 35 real PDFs with detailed schemas and human-verified answers, and shows that even frontier models struggle as schemas grow large and complex. The authors release ExtractBench openly to help drive progress on this important task.
unstructured documents, PDF-to-JSON extraction, large language models, nested extraction, structured data, evaluation benchmark, schema, semantic equivalence, field alignment, hallucination
Authors
Nick Ferguson, Josh Pennington, Narek Beghian, Aravind Mohan, Douwe Kiela, Sheshansh Agrawal, Thien Hang Nguyen
Abstract
Unstructured documents like PDFs contain valuable structured information, but downstream systems require this data in reliable, standardized formats. LLMs are increasingly deployed to automate this extraction, making accuracy and reliability paramount. However, progress is bottlenecked by two gaps. First, no end-to-end benchmark evaluates PDF-to-JSON extraction under enterprise-scale schema breadth. Second, no principled methodology captures the semantics of nested extraction, where fields demand different notions of correctness (exact match for identifiers, tolerance for quantities, semantic equivalence for names), arrays require alignment, and omission must be distinguished from hallucination. We address both gaps with ExtractBench, an open-source benchmark and evaluation framework for PDF-to-JSON structured extraction. The benchmark pairs 35 PDF documents with JSON Schemas and human-annotated gold labels across economically valuable domains, yielding 12,867 evaluatable fields spanning schema complexities from tens to hundreds of fields. The evaluation framework treats the schema as an executable specification: each field declares its scoring metric. Baseline evaluations reveal that frontier models (GPT-5/5.2, Gemini-3 Flash/Pro, Claude 4.5 Opus/Sonnet) remain unreliable on realistic schemas. Performance degrades sharply with schema breadth, culminating in 0% valid output on a 369-field financial reporting schema across all tested models. We release ExtractBench at https://github.com/ContextualAI/extract-bench.
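The "schema as executable specification" idea from the abstract can be pictured with a short sketch. The snippet below is illustrative only: the `x-metric` annotation, the field names (`invoice_id`, `total_amount`, `vendor_name`, `po_number`), the 0.8 similarity threshold, and the use of `difflib.SequenceMatcher` as a crude stand-in for a real semantic-equivalence check are all assumptions for this example, not ExtractBench's actual schema format or scoring code.

```python
# Minimal sketch (not ExtractBench's implementation): each field declares how
# it should be scored via a hypothetical "x-metric" annotation, and the scorer
# separates omission (missing prediction) from hallucination (a predicted
# value where the gold label has none).
from difflib import SequenceMatcher

SCHEMA = {
    "invoice_id":   {"x-metric": "exact"},                   # identifier: exact match
    "total_amount": {"x-metric": "tolerance", "tol": 0.05},  # quantity: numeric tolerance
    "vendor_name":  {"x-metric": "semantic"},                # name: semantic equivalence
    "po_number":    {"x-metric": "exact"},                   # absent in the gold label below
}

def score_field(spec, gold, pred):
    """Score one leaf field; returns (outcome label, score in [0, 1])."""
    if gold is None and pred is None:
        return "correct_omission", 1.0
    if gold is None:
        return "hallucination", 0.0      # model invented a value
    if pred is None:
        return "omission", 0.0           # model dropped a real value

    metric = spec["x-metric"]
    if metric == "exact":
        return ("match" if pred == gold else "mismatch"), float(pred == gold)
    if metric == "tolerance":
        ok = abs(float(pred) - float(gold)) <= spec.get("tol", 0.0)
        return ("match" if ok else "mismatch"), float(ok)
    if metric == "semantic":
        # Crude string-similarity stand-in for an embedding- or judge-based check.
        sim = SequenceMatcher(None, str(pred).lower(), str(gold).lower()).ratio()
        return ("match" if sim >= 0.8 else "mismatch"), sim
    raise ValueError(f"unknown metric: {metric}")

def score_document(schema, gold_doc, pred_doc):
    """Apply each field's declared metric to a flat gold/prediction pair."""
    return {
        field: score_field(spec, gold_doc.get(field), pred_doc.get(field))
        for field, spec in schema.items()
    }

if __name__ == "__main__":
    gold = {"invoice_id": "INV-0042", "total_amount": 1203.50,
            "vendor_name": "Acme Corporation", "po_number": None}
    pred = {"invoice_id": "INV-0042", "total_amount": 1203.52,
            "vendor_name": "Acme Corporation Inc.", "po_number": "PO-789"}
    for field, (label, score) in score_document(SCHEMA, gold, pred).items():
        print(f"{field:>14}: {label:>16}  score={score:.2f}")
```

In this framing, the same schema that constrains the model's output also drives evaluation, and omissions and hallucinations are scored as distinct outcomes rather than folded into a single error count.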