Rethinking Loss Reweighting for Imbalance Learning as an Inverse Problem: A Neural Collapse Point of View

2026-05-11

Machine Learning, Artificial Intelligence
AI summary

The authors study how to better balance learning in situations where some classes appear much less often than others (long-tailed classification). They suggest that an ideal way to weight losses across classes is to aim for equal average loss per class, inspired by a mathematical concept called Neural Collapse. By viewing loss reweighting as a problem of figuring out the best class weights dynamically, their method adjusts these weights to reach this equal-loss goal. Experiments show their approach better matches the ideal geometry and improves performance compared to existing methods across various datasets.

long-tailed classification, loss reweighting, Neural Collapse, Equiangular Tight Frame, inverse problem, class imbalance, machine learning, classification loss, deep learning
Authors
Jinping Wang, Zixin Tong, Zhiwu Xie, Zhiqiang Gao
Abstract
Loss reweighting is a widely used strategy for long-tailed classification, but existing strategies often rely on heuristics and rarely define a well-specified target. Inspired by Neural Collapse (NC), whose ideal simplex Equiangular Tight Frame (ETF) terminal geometry implies equal per-class average loss, we take this equal-loss condition as the reweighting target. We then cast loss reweighting as an inverse problem and propose an inverse-view reweighting strategy that dynamically infers the class weights needed to reach this target. Empirically, NC metrics show that our method effectively reduces the loss imbalance coefficient and achieves closer alignment with NC geometry, while consistently outperforming strong long-tailed baselines across datasets.
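
The abstract does not spell out the update rule, but the inverse view lends itself to a short sketch: treat the observed per-class average losses as the forward map and solve for the weights that would equalize the weighted per-class loss. Below is a minimal PyTorch sketch under that reading; the names (`weighted_ce`, `infer_class_weights`, `NUM_CLASSES`), the blending step size, and the normalization are illustrative assumptions, not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 10  # assumption for the sketch

def weighted_ce(logits, targets, class_weights):
    """Cross-entropy with the current class weights applied per sample."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    loss = (class_weights[targets] * per_sample).mean()
    return loss, per_sample.detach()

@torch.no_grad()
def infer_class_weights(class_weights, per_sample_loss, targets,
                        step=0.1, eps=1e-8):
    """One inverse-style update: from the observed per-class average losses,
    solve for the weights that would equalize the weighted per-class loss,
    then blend them with the current weights for stability."""
    device = per_sample_loss.device
    sums = torch.zeros(NUM_CLASSES, device=device)
    sums.scatter_add_(0, targets, per_sample_loss)
    counts = torch.zeros(NUM_CLASSES, device=device)
    counts.scatter_add_(0, targets, torch.ones_like(per_sample_loss))
    seen = counts > 0                               # classes in this batch
    avg = sums[seen] / counts[seen]                 # per-class average loss L_c
    target = (class_weights[seen] * avg).mean()     # equal-loss target value
    # Inverse solution: w_c * L_c = target  =>  w_c = target / L_c
    new_w = class_weights.clone()
    new_w[seen] = (1 - step) * class_weights[seen] + step * target / (avg + eps)
    return new_w / new_w.mean()                     # keep the average weight at 1

# Usage inside a training loop (sketch):
#   class_weights = torch.ones(NUM_CLASSES)
#   loss, per_sample = weighted_ce(model(x), y, class_weights)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   class_weights = infer_class_weights(class_weights, per_sample, y)
```

The blending step and the mean-one normalization are one plausible way to keep the inferred weights stable across noisy mini-batches; the paper's actual inference procedure may differ.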