When Normality Shifts: Risk-Aware Test-Time Adaptation for Unsupervised Tabular Anomaly Detection
2026-05-11 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors propose RTTAD, a method for detecting unusual data points in tables without labeled examples. It combines learning during training with careful adaptation during testing, avoiding the mistakes that arise from wrongly treating anomalies as normal data. The method follows a two-step approach: first, it learns strong normal patterns during training; second, it selectively updates the model at test time using only confidently normal samples. Experiments on 15 datasets show that RTTAD outperforms previous methods.
unsupervised anomaly detection, tabular data, test-time adaptation, contrastive learning, pseudo-labeling, k-nearest neighbors, representation learning, normality shift, embedding space
Authors
Wei Huang, Hezhe Qiao, Kailai Zhang, Zaisheng Ye, Yu-Ming Shang, Xiangling Fu
Abstract
Unsupervised tabular anomaly detection methods typically learn feature patterns from normal samples during training and subsequently identify samples that deviate from these patterns as anomalies during testing. However, in practical scenarios, the limited scale and diversity of training data often lead to an incomplete characterization of normal patterns. While test-time adaptation offers a remedy, its isolated focus on test-time optimization ignores the critical synergy with training-phase learning. Furthermore, indiscriminate adaptation to unlabeled test data inevitably triggers anomaly contamination, preventing the model from fully realizing its discriminative capability between normal and anomalous samples. To address these issues, we propose RTTAD, a Risk-aware Test-time adaptation method for unsupervised Tabular Anomaly Detection. RTTAD holistically tackles normality shifts via a synergistic two-stage mechanism. During training, collaborative dual-task learning captures multi-level representations to establish a robust normal prior. During testing, a Test-Time Contrastive Learning (TTCL) module explicitly accounts for adaptation risk by selectively updating the model using high-confidence pseudo-normal samples while constraining anomalous ones. Additionally, TTCL incorporates a k-nearest neighbor-based contrastive objective to refine embedding distributions, thereby further enhancing the model's discriminative capacity. Extensive experiments on 15 tabular datasets demonstrate that RTTAD achieves state-of-the-art overall detection performance.
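To make the risk-aware adaptation idea concrete, the sketch below illustrates one possible test-time update step: score an unlabeled test batch, keep only low-score (high-confidence pseudo-normal) samples, and refine the encoder with a k-nearest-neighbor contrastive objective computed on those samples alone. This is a minimal illustration under assumptions, not the authors' released implementation; the prototype-distance scoring, the quantile threshold, the temperature, and the helper names (`select_pseudo_normals`, `knn_contrastive_loss`, `adapt_step`) are all placeholders chosen for clarity.

```python
# Illustrative sketch of risk-aware test-time adaptation (not the authors' code).
# Assumes a trained PyTorch encoder whose distance to a "normal" prototype serves
# as the anomaly score; the threshold and loss form are assumptions for exposition.
import torch
import torch.nn.functional as F


def select_pseudo_normals(scores: torch.Tensor, quantile: float = 0.5) -> torch.Tensor:
    """Keep only test samples whose anomaly score falls below the batch quantile."""
    tau = torch.quantile(scores, quantile)
    return scores <= tau  # boolean mask of high-confidence pseudo-normal samples


def knn_contrastive_loss(z: torch.Tensor, k: int = 5, temperature: float = 0.1) -> torch.Tensor:
    """Pull each embedding toward its k nearest neighbors, push it from the rest."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                           # pairwise cosine similarities
    diag = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float("-inf"))              # exclude self-pairs
    knn_idx = sim.topk(k, dim=1).indices                    # indices of k nearest neighbors
    log_prob = F.log_softmax(sim, dim=1)
    return -log_prob.gather(1, knn_idx).mean()              # InfoNCE-style neighbor attraction


def adapt_step(encoder, prototype, x_test, optimizer, quantile=0.5, k=5):
    """One risk-aware adaptation step on an unlabeled test batch."""
    with torch.no_grad():
        scores = (encoder(x_test) - prototype).norm(dim=1)  # anomaly scores for the batch
    mask = select_pseudo_normals(scores, quantile)
    if mask.sum() <= k:                                     # too few confident samples: skip update
        return None
    z = encoder(x_test[mask])                               # embed pseudo-normal samples only
    loss = knn_contrastive_loss(z, k=k)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point mirrored here is that samples flagged as likely anomalies never drive the gradient update, which limits anomaly contamination, while the kNN-based contrastive term tightens the embedding distribution of the retained pseudo-normal samples.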