Geometry-aware similarity metrics for neural representations on Riemannian and statistical manifolds
2026-03-30 • Machine Learning
Machine Learning · Artificial Intelligence
AI summary
The authors introduce a new method called metric similarity analysis (MSA) to compare how neural networks represent information. Unlike previous methods that compare the overall shape of data in state space, MSA examines the deeper, intrinsic geometry, building on the idea that neural data lies on curved surfaces called manifolds. This approach helps reveal differences in how networks learn, how their dynamics evolve over time, and how models such as diffusion models operate. The method is mathematically grounded and broadly applicable to understanding neural computations.
neural networks · representational geometry · manifold hypothesis · Riemannian geometry · metric similarity analysis · intrinsic geometry · deep learning · nonlinear dynamics · diffusion models
Authors
N Alex Cayco Gajic, Arthur Pellegrino
Abstract
Similarity measures are widely used to interpret the representational geometries that neural networks use to solve tasks. Yet, because existing methods compare the extrinsic geometry of representations in state space, rather than their intrinsic geometry, they may fail to capture subtle yet crucial distinctions between fundamentally different neural network solutions. Here, we introduce metric similarity analysis (MSA), a novel method that leverages tools from Riemannian geometry to compare the intrinsic geometry of neural representations under the manifold hypothesis. We show that MSA can be used to i) disentangle features of neural computations in deep networks with different learning regimes, ii) compare nonlinear dynamics, and iii) investigate diffusion models. Hence, we introduce a mathematically grounded and broadly applicable framework to understand the mechanisms behind neural computations by comparing their intrinsic geometries.
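The abstract does not specify the MSA algorithm itself, but the intrinsic-versus-extrinsic distinction it draws can be illustrated with a loose sketch. The code below is not MSA: it approximates intrinsic geometry with Isomap-style graph geodesics (shortest paths on a k-nearest-neighbour graph) and compares two representations by correlating their pairwise distance matrices. All function names and parameters here are illustrative choices, not the authors'.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def geodesic_distances(X, k=5):
    """Approximate intrinsic (geodesic) distances on the data manifold
    via shortest paths over a k-nearest-neighbour graph (Isomap-style)."""
    D = squareform(pdist(X))                  # extrinsic Euclidean distances
    n = D.shape[0]
    G = np.zeros((n, n))                      # zero entries = no edge
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]  # k nearest neighbours (skip self)
    for i in range(n):
        G[i, nbrs[i]] = D[i, nbrs[i]]
    G = np.maximum(G, G.T)                    # symmetrise the graph
    return shortest_path(G, method="D", directed=False)

def upper(M):
    """Vectorise the strict upper triangle of a distance matrix."""
    return M[np.triu_indices_from(M, k=1)]

# Two embeddings of the same 1-D latent variable t: a straight line and a
# spiral. Extrinsically (in state space) they look very different, but both
# inherit the same one-dimensional intrinsic geometry from t.
t = np.linspace(0, 3 * np.pi, 200)
line = np.c_[t, np.zeros_like(t)]
spiral = np.c_[t * np.cos(t), t * np.sin(t)]

# Intrinsic comparison: correlate graph-geodesic distance matrices.
sim_intrinsic = np.corrcoef(upper(geodesic_distances(line)),
                            upper(geodesic_distances(spiral)))[0, 1]
# Extrinsic comparison: correlate raw Euclidean distance matrices.
sim_extrinsic = np.corrcoef(upper(squareform(pdist(line))),
                            upper(squareform(pdist(spiral))))[0, 1]
print(f"intrinsic: {sim_intrinsic:.2f}  extrinsic: {sim_extrinsic:.2f}")
```

On this toy pair, the geodesic (intrinsic) comparison scores the two embeddings as highly similar, while the raw Euclidean (extrinsic) comparison penalises the spiral's curling in state space, which is the kind of distinction the abstract argues extrinsic measures can miss.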