Enhancing Structural Mapping with LLM-derived Abstractions for Analogical Reasoning in Narratives

2026-03-31

Computation and Language · Artificial Intelligence
AI summary

The authors studied how computers can better understand and compare stories by using analogies, a type of reasoning humans use to see similarities between different narratives. They created a system called YARN that breaks stories into smaller parts, simplifies these parts into different levels of meaning, and then matches these parts across stories to find connections. Their experiments showed that adding these simplified versions helps the system perform as well as or better than existing methods. The authors also identified challenges like choosing the right level of detail and understanding hidden causes in stories. They shared their code to encourage further research.

Analogical reasoning · Narrative structures · Large language models (LLMs) · Structural mapping · Abstraction · Framing · Causality · Modular framework · Story decomposition · Error analysis
Authors
Mohammadhossein Khojasteh, Yifan Jiang, Stefano De Giorgis, Frank van Harmelen, Filip Ilievski
Abstract
Analogical reasoning is a key driver of human generalization in problem-solving and argumentation. Yet, analogies between narrative structures remain challenging for machines. Cognitive engines for structural mapping are not directly applicable, as they assume pre-extracted entities, whereas LLMs' performance is sensitive to prompt format and to the degree of surface similarity between narratives. This gap motivates a key question: What is the impact of enhancing structural mapping with LLM-derived abstractions on analogical reasoning ability in narratives? To that end, we propose a modular framework named YARN (Yielding Abstractions for Reasoning in Narratives), which uses LLMs to decompose narratives into units and to abstract those units, then passes them to a mapping component that aligns elements across stories to perform analogical reasoning. We define and operationalize four levels of abstraction that capture both the general meaning of units and their roles in the story, grounded in prior work on framing. Our experiments reveal that abstractions consistently improve model performance, yielding results competitive with or better than end-to-end LLM baselines. Closer error analysis reveals remaining challenges in choosing the right level of abstraction and in incorporating implicit causality, along with an emerging categorization of analogical patterns in narratives. YARN enables systematic variation of experimental settings to analyze component contributions, and to support future work, we make the code for YARN openly available.
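The modular pipeline described in the abstract (decompose → abstract → map) can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: all names, the sentence-splitting decomposer, and the toy abstraction and matching functions are hypothetical stand-ins for the LLM-driven components.

```python
# Hypothetical sketch of a YARN-style pipeline: decompose a narrative into
# units, abstract each unit, and align units across two stories.
# Every function here is a placeholder; the paper's system uses LLMs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Unit:
    text: str              # surface text of the narrative unit
    abstraction: str = ""  # LLM-derived abstraction (placeholder here)


def decompose(story: str) -> list[Unit]:
    # Placeholder decomposer: naive sentence split instead of an LLM.
    return [Unit(s.strip()) for s in story.split(".") if s.strip()]


def abstract_units(units: list[Unit], level: int) -> list[Unit]:
    # Placeholder for one of the four abstraction levels; a real system
    # would prompt an LLM for the unit's general meaning and story role.
    return [Unit(u.text, f"L{level}:{u.text.split()[0].lower()}") for u in units]


def map_units(src: list[Unit], tgt: list[Unit]) -> list[tuple[int, int]]:
    # Toy structural mapping: align units with identical abstractions.
    return [
        (i, j)
        for i, a in enumerate(src)
        for j, b in enumerate(tgt)
        if a.abstraction == b.abstraction
    ]


def yarn(story_a: str, story_b: str, level: int = 1) -> list[tuple[int, int]]:
    ua = abstract_units(decompose(story_a), level)
    ub = abstract_units(decompose(story_b), level)
    return map_units(ua, ub)
```

Because each stage is a separate function, individual components can be swapped out to study their contribution, which mirrors the systematic variation of experimental settings the framework is designed to support.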