Evaluating Financial Intelligence in Large Language Models: Benchmarking SuperInvesting AI with LLM Engines
2026-03-09 • Artificial Intelligence
AI summary
The authors created a test called AFIB to see how well different AI models can handle financial analysis tasks. They looked at five aspects: accuracy, thoroughness, up-to-date data, consistency, and common mistakes. They tested five models and found that SuperInvesting was the most accurate and complete with the fewest errors. Models with live data access, like Perplexity, did well on recent information but struggled with deep analysis. Their work shows that good financial AI needs both solid data access and strong reasoning skills.
Large Language Models, Financial Analysis, Equity Research, Factual Accuracy, Analytical Completeness, Data Recency, Model Consistency, Hallucination Rate, Retrieval-oriented Systems, Investment Research
Authors
Akshay Gulati, Kanha Singhania, Tushar Banga, Parth Arora, Anshul Verma, Vaibhav Kumar Singh, Agyapal Digra, Jayant Singh Bisht, Danish Sharma, Varun Singla, Shubh Garg
Abstract
Large language models are increasingly used for financial analysis and investment research, yet systematic evaluation of their financial reasoning capabilities remains limited. In this work, we introduce the AI Financial Intelligence Benchmark (AFIB), a multi-dimensional evaluation framework designed to assess financial analysis capabilities across five dimensions: factual accuracy, analytical completeness, data recency, model consistency, and failure patterns. We evaluate five AI systems (GPT, Gemini, Perplexity, Claude, and SuperInvesting) on a dataset of more than 95 structured financial analysis questions derived from real-world equity research tasks. The results reveal substantial performance differences across models. Within this benchmark setting, SuperInvesting achieves the highest aggregate performance, with an average factual accuracy score of 8.96/10 and the highest completeness score of 56.65/70, while also demonstrating the lowest hallucination rate among the evaluated systems. Retrieval-oriented systems such as Perplexity perform strongly on data recency tasks due to live information access but exhibit weaker analytical synthesis and consistency. Overall, the results highlight that financial intelligence in large language models is inherently multi-dimensional, and that systems combining structured financial data access with analytical reasoning capabilities provide the most reliable performance for complex investment research workflows.
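
To make the idea of a multi-dimensional aggregate score more concrete, the sketch below shows one plausible way per-dimension results could be normalized and combined. The dimension names follow the abstract, and the accuracy and completeness scales (out of 10 and 70) echo the figures reported there; the equal weights, the scales for the remaining dimensions, and the aggregate() helper are illustrative assumptions, not the published AFIB formula.

    from dataclasses import dataclass

    # Per-dimension maximum scores. Accuracy (/10) and completeness (/70) follow
    # the abstract; the other three scales are assumptions for illustration.
    DIMENSION_MAX = {
        "factual_accuracy": 10.0,
        "analytical_completeness": 70.0,
        "data_recency": 10.0,
        "model_consistency": 10.0,
        "hallucination_penalty": 10.0,  # higher = fewer hallucinations (inverted rate)
    }

    # Equal weighting is an assumption; AFIB may weight dimensions differently.
    WEIGHTS = {dim: 1.0 / len(DIMENSION_MAX) for dim in DIMENSION_MAX}

    @dataclass
    class ModelResult:
        name: str
        scores: dict  # raw per-dimension scores on their native scales

        def aggregate(self) -> float:
            """Normalize each dimension to [0, 1] and take a weighted sum."""
            total = 0.0
            for dim, max_score in DIMENSION_MAX.items():
                normalized = self.scores.get(dim, 0.0) / max_score
                total += WEIGHTS[dim] * normalized
            return total

    if __name__ == "__main__":
        # Illustrative numbers only: accuracy and completeness echo the abstract,
        # the remaining dimension values are placeholders.
        example = ModelResult(
            name="SuperInvesting",
            scores={
                "factual_accuracy": 8.96,
                "analytical_completeness": 56.65,
                "data_recency": 8.0,
                "model_consistency": 8.5,
                "hallucination_penalty": 9.0,
            },
        )
        print(f"{example.name}: aggregate score = {example.aggregate():.3f}")

Under this kind of scheme, a retrieval-oriented system could score highly on data recency while still ranking lower overall if its completeness and consistency scores lag, which mirrors the trade-off the abstract describes.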