Not-So-Strange Love: Language Models and Generative Linguistic Theories are More Compatible than They Appear

2026-05-11

Computation and Language, Artificial Intelligence
AI summary

Futrell and Mahowald interpret the success of neural language models as support for linguistic theories based on gradience and usage, that is, on how language is actually used and learned. The author of this paper argues that these models can also instantiate theories built on formal rules and structures, like those of the generative tradition. Language models could therefore serve as a testbed for both kinds of theory, potentially helping to connect usage-based and formal-grammar approaches.

neural language models, usage-based linguistics, generative grammar, formal structures, linguistic theory, gradient models, language acquisition, computational linguistics, Futrell and Mahowald
Authors
R. Thomas McCoy
Abstract
Futrell and Mahowald (2025) frame the success of neural language models (LMs) as supporting gradient, usage-based linguistic theories. I argue that LMs can also instantiate theories based on formal structures: the types of theories seen in the generative tradition. This argument expands the space of theories that can be tested with LMs, potentially enabling reconciliations between usage-based and generative accounts.