go-$m$HC: Direct Parameterization of Manifold-Constrained Hyper-Connections via Generalized Orthostochastic Matrices

2026-04-02 · Machine Learning, Computation and Language
AI summary

The authors address the problem of mixing information streams in neural networks via doubly stochastic matrices, which are hard to parameterize both exactly and efficiently. They propose a new exact parameterization based on generalized orthostochastic matrices that balances expressiveness and computational efficiency better than previous methods. Their approach, called go-$m$HC, composes with other efficient techniques and shows faster learning and better coverage of the space of possible mixing patterns. They demonstrate its effectiveness on synthetic tasks and in a GPT-style language model, suggesting it could help scale model capacity along a new dimension.

Doubly stochastic matrix · Birkhoff polytope · Orthostochastic matrix · Kronecker factorization · Residual streams · Layer connectivity · Manifold-constrained optimization · Spectral analysis · GPT · FLOPs
Authors
Torque Dandachi, Sophia Diggs-Galligan
Abstract
Doubly stochastic matrices enable learned mixing across residual streams, but parameterizing the set of doubly stochastic matrices (the Birkhoff polytope) exactly and efficiently remains an open challenge. Existing exact methods scale factorially with the number of streams $d$, while Kronecker-factorized approaches are efficient but limited in expressivity. We introduce a novel exact parameterization grounded in the theory of generalized orthostochastic matrices, which scales as $\mathcal{O}(d^3)$ and exposes a single hyperparameter $s$ that continuously interpolates between a computationally efficient boundary and the fully expressive Birkhoff polytope. Building on Manifold-Constrained Hyper-Connections ($m$HC), a framework for learned dynamic layer connectivity, we instantiate this parameterization in go-$m$HC. Our method composes naturally with Kronecker-factorized methods, substantially recovering their lost expressivity at similar FLOP cost. Spectral analysis indicates that go-$m$HC fills the Birkhoff polytope far more completely than Kronecker-factorized baselines. On synthetic stream-mixing tasks, go-$m$HC attains the theoretical minimum loss while converging up to $10\times$ faster. We validate our approach in a 30M-parameter GPT-style language model. The expressivity, efficiency, and exactness of go-$m$HC offer a practical avenue for scaling $d$ as a new dimension of model capacity.
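The orthostochastic idea underlying the abstract can be illustrated with a minimal sketch. It relies only on the classical fact that squaring the entries of an orthogonal matrix $Q$ yields a doubly stochastic matrix $B_{ij} = Q_{ij}^2$, since each row and column of $Q$ has unit Euclidean norm. This is an assumption-laden toy, not the paper's go-$m$HC parameterization: the generalized construction and the interpolation hyperparameter $s$ are not reproduced here.

```python
import numpy as np

def orthostochastic(d: int, seed: int = 0) -> np.ndarray:
    """Toy orthostochastic construction (not the authors' go-mHC method).

    Draws a random orthogonal matrix Q via QR decomposition of a Gaussian
    matrix, then squares its entries elementwise. Because every row and
    column of Q has unit Euclidean norm, the result is doubly stochastic.
    """
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q ** 2  # B_ij = Q_ij^2

# Check the doubly stochastic property for a small stream count d = 4.
B = orthostochastic(4)
assert np.all(B >= 0)
assert np.allclose(B.sum(axis=0), 1.0)  # columns sum to 1
assert np.allclose(B.sum(axis=1), 1.0)  # rows sum to 1
```

Note that orthostochastic matrices form a strict subset of the Birkhoff polytope for $d \geq 3$, which is why the abstract appeals to a *generalized* orthostochastic family to recover full expressivity.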