Learning to Rotate: Temporal and Semantic Rotary Encoding for Sequential Modeling

2026-04-27
Artificial Intelligence
AI summary

The authors point out that in Transformer models, positional information is injected through Rotary Positional Embeddings (RoPE), which have so far been treated as a fixed, hand-crafted structure. They argue that this rotation space can instead be made learnable and dynamic, opening a new degree of freedom in attention much as complex numbers add an imaginary axis orthogonal to the real line. Their proposed method, SIREN-RoPE, uses sinusoidal networks (SIRENs) to inject timestamps and categorical metadata into this rotation space. Experiments on a production-scale news recommendation system showed consistent improvements in ranking and calibration with negligible extra cost. They encourage the community to explore the rotation space as an untapped axis of expressivity in attention models.

Keywords
Transformer, Rotary Positional Embeddings (RoPE), attention mechanism, embedding space, complex numbers, Sinusoidal Representation Network (SIREN), positional encoding, generative recommender, ranking model, calibration
Authors
Hailing Cheng, Daqi Sun, Xinyu Lu
Abstract
Every Transformer architecture dedicates enormous capacity to learning rich representations in semantic embedding space -- yet the rotation manifold acted upon by Rotary Positional Embeddings (RoPE) has been treated as a fixed, hand-crafted structure, populated only by discrete ordinal indices. We argue that this rotation space is a largely overlooked second dimension of expressivity in the attention mechanism, one whose systematic exploration may open a new door for attention-based architectures. The analogy to complex numbers is instructive: just as introducing the imaginary axis -- orthogonal to and independent of the real line -- unlocked new algebraic structure once believed impossible, treating the rotation manifold as a learnable, signal-conditioned space opens an orthogonal degree of freedom in attention. In this framing, the token embedding encodes the semantic (real) component of a representation -- what a token means -- while the rotation encodes its dynamic (imaginary) component -- how it relates to every other token across time, position, and context. We introduce SIREN-RoPE, a concrete instantiation of this idea, which populates the rotation dimension with heterogeneous signals -- continuous timestamps, cyclical temporal patterns, and categorical metadata -- via a dual-branch Sinusoidal Representation Network (SIREN). As a proof of concept, we evaluate on a production-scale news feed dataset from a major social network using a generative recommender as the ranking model, demonstrating that activating this hidden dimension yields consistent improvements across calibration and ranking objectives with negligible computational overhead. We invite the community to view the rotation space not as a solved positional-encoding detail, but as an untapped axis whose rich structure may prove as consequential for attention as the imaginary unit proved for algebra.
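The abstract does not spell out the architecture, but the core idea, replacing RoPE's fixed position-indexed rotation angles with angles produced by a sine-activated network conditioned on a continuous signal such as a timestamp, can be sketched as follows. This is a minimal illustration under our own assumptions; all module names, shapes, and hyperparameters (e.g. the SIREN frequency `omega0`) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class SIRENAngles(nn.Module):
    """Sine-activated MLP (a SIREN) mapping a scalar input signal,
    e.g. a normalized timestamp, to one rotation angle per 2D
    feature pair. In standard RoPE these angles are a fixed function
    of the integer position; here they are learned from the signal."""

    def __init__(self, hidden: int, n_pairs: int, omega0: float = 30.0):
        super().__init__()
        self.omega0 = omega0        # first-layer frequency scale, per SIREN convention
        self.fc1 = nn.Linear(1, hidden)
        self.fc2 = nn.Linear(hidden, n_pairs)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch, seq, 1) continuous signal -> (batch, seq, n_pairs) angles
        h = torch.sin(self.omega0 * self.fc1(t))
        return self.fc2(h)


def rotate(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Apply RoPE-style 2D rotations to interleaved feature pairs.
    x: (batch, seq, dim), theta: (batch, seq, dim // 2)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = theta.cos(), theta.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


# Usage: rotate queries and keys by the same signal-conditioned angles,
# so their dot product depends only on the *difference* of angles,
# preserving RoPE's relative-encoding property.
torch.manual_seed(0)
net = SIRENAngles(hidden=16, n_pairs=4)
t = torch.rand(2, 5, 1)             # e.g. normalized event timestamps
theta = net(t)                      # (2, 5, 4)
q = torch.randn(2, 5, 8)
q_rot = rotate(q, theta)            # rotation preserves vector norms
```

Because each 2x2 rotation is orthogonal, the transform leaves the norm of every query and key unchanged; only relative orientation, and hence the attention logits, is modulated by the signal.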