Useful for Exploration, Risky for Precision: Evaluating AI Tools in Academic Research

2026-05-11

Artificial Intelligence, Human-Computer Interaction
AI summary

The authors looked at how AI tools help researchers by answering questions and reviewing scientific papers. They found that these tools give useful summaries but often make mistakes and don't clearly show how they arrived at their answers, so researchers still need to carefully check the AI's work. The authors also noted that current tests don't fully consider how easy or clear the tools are for real people to use. They suggest better evaluations focused on both human experience and technical accuracy to improve these AI tools for research.

artificial intelligence, question answering, literature review, benchmarking, explainability, transparency, usability, research workflows, systematic reviews, human-centered evaluation
Authors
Anthea Dathe, Kiran Hoffmann, Aline Mangold
Abstract
Artificial intelligence (AI) tools are being incorporated into scientific research workflows with the potential to enhance efficiency in tasks such as document analysis, question answering (Q&A), and literature search. However, system outputs are often difficult to verify, lack transparency in how they are generated, and remain prone to errors. Suitable benchmarks are needed to document and evaluate these issues, yet existing benchmarking approaches do not adequately capture human-centered criteria such as usability, interpretability, and integration into research workflows. To address this gap, the present work proposes and applies a benchmarking framework that combines human-centered and computer-centered metrics to evaluate AI-based Q&A and literature review tools for research use. The findings suggest that Q&A tools can offer valuable overviews and generally accurate summaries; however, they are not always reliable for precise information extraction. Explainable AI (xAI) accuracy was particularly low: highlighted source passages frequently failed to correspond to the generated answers, shifting the burden of validation back onto the researcher. Literature review tools supported exploratory searches but showed low reproducibility, limited transparency regarding the sources and databases consulted, and inconsistent source quality, making them unsuitable for systematic reviews. A comparison of the two tool groups reveals a common pattern: while AI tools can enhance efficiency in the early stages of the research workflow and in shallow tasks, their outputs still require human verification. The findings underscore the importance of explainability features for transparency and verification efficiency, and of careful integration of AI tools into researchers' workflows. Furthermore, human-centered evaluation remains an important concern to ensure practical applicability.