Scaling Vision Models Does Not Consistently Improve Localisation-Based Explanation Quality
2026-05-11 • Computer Vision and Pattern Recognition • Artificial Intelligence
AI summary
The authors studied whether making AI models larger and more complex improves how well their explanations match the parts of an image that actually matter. They tested models of different sizes with several explanation methods on datasets that include ground-truth image masks. They found that bigger models do not consistently give better explanations, and smaller models often do just as well or better. Pretraining improved prediction but not explanation quality. The authors conclude that strong predictive performance alone does not guarantee accurate explanations, so explanation quality should be checked explicitly, especially for high-stakes uses.
Artificial Intelligence · Explainable AI · ResNet · DenseNet · Vision Transformer · Post-hoc Explanation · Image Segmentation · Model Pretraining · Localization Metrics · Predictive Performance
Authors
Mateusz Cedro, Marcin Chlebus
Abstract
Artificial intelligence models are increasingly scaled to improve predictive accuracy, yet it remains unclear whether scale improves the quality of post-hoc explanations. We investigate this relationship by evaluating 11 computer vision models representing increasing levels of depth and complexity within the ResNet, DenseNet, and Vision Transformer families, trained from scratch or pretrained, across three image datasets with ground-truth segmentation masks. For each model, we generate explanations using five post-hoc explainable AI methods and quantify mask alignment using two localisation metrics: Relevance Rank Accuracy (Arras et al., 2022) and the proposed Dual-Polarity Precision, which measures positive attributions inside the class mask and negative attributions outside it. Across datasets and methods, increasing architectural depth and parameter count does not improve explanation quality in most statistical comparisons, and smaller models often match or exceed deeper variants. While pretraining typically improves predictive performance and increases the dependence of explanations on learned weights, it does not consistently increase localisation scores. We also observe scenarios in which models achieve strong predictive performance while localisation precision is near zero, suggesting that performance metrics alone may not indicate whether predictions are based on the annotated regions. These results indicate that larger models do not reliably provide higher-quality explanations, and that explainability should therefore be assessed explicitly during model selection for safety-sensitive deployments.
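As a rough illustration of the two localisation metrics named in the abstract, the sketch below computes Relevance Rank Accuracy following the standard definition from Arras et al. (2022), i.e. the fraction of the K highest-attributed pixels that fall inside the ground-truth mask, with K equal to the mask size, together with one plausible reading of the proposed Dual-Polarity Precision. The paper's exact formula for Dual-Polarity Precision is not given in the abstract, so the `dual_polarity_precision` function, its normalisation, and both function names are assumptions for illustration only.

```python
import numpy as np

def relevance_rank_accuracy(attribution: np.ndarray, mask: np.ndarray) -> float:
    """Relevance Rank Accuracy (Arras et al., 2022): fraction of the K
    highest-attributed pixels that lie inside the ground-truth mask,
    where K is the number of pixels in the mask."""
    mask = mask.astype(bool)
    k = int(mask.sum())
    if k == 0:
        return 0.0
    flat_attr = attribution.ravel()
    flat_mask = mask.ravel()
    # Indices of the K most relevant pixels (descending attribution order).
    top_k = np.argsort(flat_attr)[::-1][:k]
    return float(flat_mask[top_k].sum()) / k

def dual_polarity_precision(attribution: np.ndarray, mask: np.ndarray) -> float:
    """Assumed formulation of Dual-Polarity Precision: share of total
    attribution magnitude that is correctly polarised, i.e. positive
    attribution inside the class mask plus negative attribution outside it."""
    mask = mask.astype(bool)
    pos = np.clip(attribution, 0, None)    # positive attribution magnitudes
    neg = np.clip(-attribution, 0, None)   # negative attribution magnitudes
    correct = pos[mask].sum() + neg[~mask].sum()
    total = pos.sum() + neg.sum()
    return float(correct / total) if total > 0 else 0.0
```

Under this reading, a score near zero for Dual-Polarity Precision would mean that most positive attribution mass falls outside the annotated region (or negative mass inside it), which matches the abstract's observation that strong predictive performance can coexist with near-zero localisation precision.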