Fast Spatial Memory with Elastic Test-Time Training

2026-04-08

Computer Vision and Pattern Recognition · Graphics · Machine Learning
AI summary

The authors identified a problem in a method called Large Chunk Test-Time Training (LaCT), which struggles to handle long 3D sequences without forgetting earlier information or overfitting. They introduced Elastic Test-Time Training, which helps stabilize learning by keeping track of an 'anchor' state that balances remembering old knowledge and learning new details. Building on this, they created Fast Spatial Memory (FSM), a model that efficiently learns and reconstructs 4D scenes from long observation sequences. Their experiments show FSM can adapt quickly and produce high-quality reconstructions in a more scalable way than previous methods.

Large Chunk Test-Time Training (LaCT) · Elastic Weight Consolidation · Fisher-weighted elastic prior · Fast Spatial Memory (FSM) · 3D/4D reconstruction · Spatiotemporal representation · Catastrophic forgetting · Overfitting · Exponential moving average · Camera-interpolation shortcut
Authors
Ziqiao Ma, Xueyang Yu, Haoyu Zhen, Yuncong Yang, Joyce Chai, Chuang Gan
Abstract
Large Chunk Test-Time Training (LaCT) has shown strong performance on long-context 3D reconstruction, but its fully plastic inference-time updates remain vulnerable to catastrophic forgetting and overfitting. As a result, LaCT is typically instantiated with a single large chunk spanning the full input sequence, falling short of the broader goal of handling arbitrarily long sequences in a single pass. We propose Elastic Test-Time Training, inspired by elastic weight consolidation, which stabilizes LaCT fast-weight updates with a Fisher-weighted elastic prior around a maintained anchor state. The anchor evolves as an exponential moving average of past fast weights to balance stability and plasticity. Building on this architecture, we introduce Fast Spatial Memory (FSM), an efficient and scalable model for 4D reconstruction that learns spatiotemporal representations from long observation sequences and renders novel view-time combinations. We pre-train FSM on large-scale curated 3D/4D data to capture the dynamics and semantics of complex spatial environments. Extensive experiments show that FSM supports fast adaptation over long sequences and delivers high-quality 3D/4D reconstruction with smaller chunks while mitigating the camera-interpolation shortcut. Overall, we hope to advance LaCT beyond the bounded single-chunk setting toward robust multi-chunk adaptation, a necessary step for generalization to genuinely longer sequences, while substantially alleviating the activation-memory bottleneck.
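To make the abstract's update rule concrete, here is a minimal NumPy sketch of one elastic fast-weight step as described: the chunk gradient is combined with a Fisher-weighted quadratic pull toward the anchor, and the anchor then tracks the fast weights as an exponential moving average. The function name, per-parameter Fisher vector, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def elastic_ttt_step(w, grad, fisher, anchor, lr=1e-2, lam=1.0, ema=0.99):
    """One hypothetical Elastic Test-Time Training update (sketch).

    w      : fast weights for the current chunk
    grad   : gradient of the chunk's test-time objective w.r.t. w
    fisher : per-parameter Fisher information estimate (importance weights)
    anchor : EMA anchor state that consolidates past fast weights
    lam    : strength of the Fisher-weighted elastic prior
    ema    : anchor decay rate (stability vs. plasticity trade-off)
    """
    # Fisher-weighted elastic prior: quadratic penalty pulling w toward
    # the anchor, scaled per parameter by its estimated importance.
    elastic_grad = lam * fisher * (w - anchor)
    w_new = w - lr * (grad + elastic_grad)
    # The anchor evolves as an exponential moving average of fast weights,
    # so important past knowledge decays slowly instead of being overwritten.
    anchor_new = ema * anchor + (1.0 - ema) * w_new
    return w_new, anchor_new
```

With a large `lam` and Fisher values concentrated on parameters that mattered for earlier chunks, updates are effectively frozen where forgetting would be costly while remaining plastic elsewhere.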