Developing the PsyCogMetrics AI Lab to Evaluate Large Language Models and Advance Cognitive Science -- A Three-Cycle Action Design Science Study

2026-03-13

Artificial Intelligence
AI summary

The authors developed the PsyCogMetrics AI Lab, a cloud-based platform for testing and understanding how well large language models (LLMs) perform, using methods from psychology and cognitive science. They followed a structured three-step research process: first identifying problems with current evaluation approaches, then applying established theories to set design goals, and finally building and refining the platform through repeated testing. Their work offers a new tool for assessing LLMs, supporting research at the intersection of AI, psychology, and the social sciences.

Large Language Model (LLM), Psychometrics, Cognitive Science, Action Design Science, Popperian Falsifiability, Classical Test Theory, Cognitive Load Theory, Build-Intervene-Evaluate Loop, IT Artifact, Cloud Computing
Authors
Zhiye Jin, Yibai Li, K. D. Joshi, Xuefei Deng, Xiaobing Li
Abstract
This study presents the development of the PsyCogMetrics AI Lab (psycogmetrics.ai), an integrated, cloud-based platform that operationalizes psychometric and cognitive-science methodologies for Large Language Model (LLM) evaluation. The work is framed as a three-cycle Action Design Science study: the Relevance Cycle identifies key limitations of current evaluation methods and unmet stakeholder needs; the Rigor Cycle draws on kernel theories such as Popperian falsifiability, Classical Test Theory, and Cognitive Load Theory to derive deductive design objectives; and the Design Cycle operationalizes these objectives through nested Build-Intervene-Evaluate loops. The study contributes a novel IT artifact, a validated design for LLM evaluation, benefiting research at the intersection of AI, psychology, cognitive science, and the social and behavioral sciences.