Phase Transitions in the Fluctuations of Functionals of Random Neural Networks
2026-04-21 • Machine Learning
AI summary
The authors study how certain mathematical functionals behave in very wide deep neural networks as the depth becomes very large. They find that the outcomes are governed by special points (fixed points) of the covariance function that describes how successive layers relate. Depending on these fixed points, the results fall into three categories: staying the same functional of a limiting random field, becoming a normal random variable, or following a more complex distribution tied to something called Wiener chaos. Their work uses known mathematical tools but also introduces a new way to analyze these behaviors through the fixed points of an operator linked to the covariance.
Gaussian process, infinitely-wide neural network, covariance function, fixed points, limit theorems, Hermite expansions, Wiener chaos, Stein-Malliavin techniques, Diagram Formula
Authors
Simmaco Di Lillo, Leonardo Maini, Domenico Marinucci
Abstract
We establish central and non-central limit theorems for sequences of functionals of the Gaussian output of an infinitely-wide random neural network on the d-dimensional sphere 𝕊^d. We show that the asymptotic behaviour of these functionals as the depth of the network increases depends crucially on the fixed points of the covariance function, resulting in three distinct limiting regimes: convergence to the same functional of a limiting Gaussian field, convergence to a Gaussian distribution, or convergence to a distribution in the Q-th Wiener chaos. Our proofs exploit tools that are now classical (Hermite expansions, the Diagram Formula, Stein-Malliavin techniques), but also ideas that have not previously been used in similar contexts: in particular, the asymptotic behaviour is determined by the fixed-point structure of the iterative operator associated with the covariance, whose nature and stability govern the different limiting regimes.
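To make the fixed-point mechanism concrete, here is a minimal numerical sketch of how a correlation map is iterated across the layers of a wide network. It uses the standard normalized-ReLU correlation map (a Cho-Saul-type dual activation) purely as an illustrative choice; the function names and the specific activation are assumptions for this sketch, not the paper's construction.

```python
import numpy as np

def relu_corr_map(rho):
    """Correlation map of a wide layer of normalized ReLU units:
    if two inputs to the layer have correlation rho, the outputs have
    correlation psi(rho) = (sqrt(1 - rho^2) + rho * (pi - arccos(rho))) / pi.
    (Illustrative choice; the paper treats a general covariance operator.)"""
    rho = np.clip(rho, -1.0, 1.0)
    return (np.sqrt(1.0 - rho**2) + rho * (np.pi - np.arccos(rho))) / np.pi

def iterate_depth(psi, rho0, depth):
    """Apply the correlation map `depth` times: the deep-network correlation
    is an orbit of the iterated operator, so its limit is a fixed point."""
    rho = rho0
    for _ in range(depth):
        rho = psi(rho)
    return rho

# For this map every initial correlation is driven towards the fixed point
# rho = 1, where psi'(1) = 1, so convergence is slow (polynomial in depth).
for rho0 in (-0.9, 0.0, 0.5):
    print(f"rho0 = {rho0:+.1f} -> depth 200: "
          f"{iterate_depth(relu_corr_map, rho0, 200):.4f}")
```

In this toy example the fixed point at correlation 1 is attracting but only marginally stable (the derivative of the map equals 1 there); it is precisely this kind of information about the nature and stability of the fixed points that, in the paper, separates the three limiting regimes.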