FinTradeBench: A Financial Reasoning Benchmark for LLMs

2026-03-19 · Computational Engineering, Finance, and Science

Computational Engineering, Finance, and Science · Artificial Intelligence · Computation and Language · Information Retrieval
AI summary

The authors created FinTradeBench, a new test for AI models to see how well they can understand company financial data and stock market signals together. This benchmark has 1,400 questions based on 10 years of data from NASDAQ-100 companies, covering basics, trading signals, and questions combining both. They tested 14 large language models and found that while these models got better when using retrieval techniques for company data, they struggled with interpreting trading signal information. The results suggest current AI has trouble doing complex math and time-based reasoning needed for financial decisions. The authors hope this will encourage more work to improve AI in finance.

large language model · financial decision-making · NASDAQ-100 · company fundamentals · trading signals · zero-shot prompting · retrieval augmentation · numerical reasoning · time-series data · benchmark evaluation
Authors
Yogesh Agrawal, Aniruddha Dutta, Md Mahadi Hasan, Santu Karmaker, Aritra Dutta
Abstract
Real-world financial decision-making is a challenging problem that requires reasoning over heterogeneous signals, including company fundamentals derived from regulatory filings and trading signals computed from price dynamics. Recently, with the advancement of Large Language Models (LLMs), financial analysts have begun to use them for financial decision-making tasks. However, existing financial question answering benchmarks for testing these models focus primarily on company balance sheet data and rarely evaluate reasoning over how company stocks trade in the market or how trading behavior interacts with fundamentals. To capture both sources of information, we introduce FinTradeBench, a benchmark for evaluating financial reasoning that integrates company fundamentals and trading signals. FinTradeBench contains 1,400 questions grounded in NASDAQ-100 companies over a ten-year historical window. The benchmark is organized into three reasoning categories: fundamentals-focused, trading-signal-focused, and hybrid questions requiring cross-signal reasoning. To ensure reliability at scale, we adopt a calibration-then-scaling framework that combines expert seed questions, multi-model response generation, intra-model self-filtering, numerical auditing, and human-LLM judge alignment. We evaluate 14 LLMs under zero-shot prompting and retrieval-augmented settings and observe a clear performance gap: retrieval substantially improves reasoning over textual fundamentals but provides limited benefit for trading-signal reasoning. These findings highlight fundamental challenges in numerical and time-series reasoning for current LLMs and motivate future research in financial intelligence.
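To make the trading-signal category concrete, the sketch below shows one common signal computed from price dynamics: a moving-average crossover. This is an illustrative example, not the paper's actual signal set; the window lengths and the synthetic price series are hypothetical.

```python
# Hypothetical sketch of a trading signal of the kind FinTradeBench
# questions reason over: a simple moving-average crossover.

def moving_average(prices, window):
    """Trailing simple moving average; None until the window fills."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signal(prices, short=3, long=5):
    """+1 when the short MA is above the long MA, -1 when below, 0 otherwise."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    signals = []
    for s, l in zip(short_ma, long_ma):
        if s is None or l is None:
            signals.append(0)  # not enough history yet
        else:
            signals.append(1 if s > l else (-1 if s < l else 0))
    return signals

# Synthetic daily closing prices (illustrative only)
prices = [100, 101, 103, 102, 105, 107, 110, 108, 111, 115]
print(crossover_signal(prices))
# → [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```

Answering a hybrid question in the benchmark would require combining a signal like this with fundamentals from filings, which is exactly the cross-signal reasoning the authors find current LLMs struggle with.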