DA-SegFormer: Damage-Aware Semantic Segmentation for Fine-Grained Disaster Assessment

2026-05-11 · Computer Vision and Pattern Recognition · Machine Learning
AI summary

The authors address the challenge of accurately identifying different levels of damage from drone images after disasters. They improved a computer vision model called SegFormer by making it focus more on rare damage types and preserving image details. Their approach, called DA-SegFormer, uses special sampling and training techniques to better recognize minor and major roof damage. Testing on a disaster dataset showed their method outperforms previous models, especially in detecting critical damages.

Keywords
Damage assessment, UAV imagery, SegFormer, Class-Aware Sampling, Online Hard Example Mining, Dice Loss, mIoU, RescueNet dataset, Resolution-preserving inference
Authors
Kevin Zhu, William Tang, Raphael Hay Tene, Zesheng Liu, Nhut Le, Maryam Rahnemoonfar
Abstract
Rapid and accurate damage assessment following natural disasters is critical for effective emergency response. However, identifying fine-grained damage levels (e.g., distinguishing minor from major roof damage) in UAV imagery remains challenging due to the degradation of texture cues during resizing and extreme class imbalance. We propose DA-SegFormer, a damage-aware adaptation of the SegFormer architecture optimized for high-resolution disaster imagery. Our method introduces a Class-Aware Sampling strategy to guarantee exposure to rare damage features, and integrates Online Hard Example Mining (OHEM) with Dice Loss to dynamically focus on underrepresented classes. In addition, we employ a resolution-preserving inference protocol that maintains native texture details. Evaluated on the RescueNet dataset, DA-SegFormer achieves 74.61% mIoU, outperforming the baseline by 2.55%. Notably, our improvements yield double-digit gains in critical damage classes: Minor Damage (+11.7%) and Major Damage (+21.3%).
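The abstract does not detail how Class-Aware Sampling is implemented, but the general idea can be sketched as a two-stage draw: first pick a damage class uniformly, then pick an image containing that class, so images with rare classes (e.g., Major Damage) are seen far more often than their raw frequency would allow. The sketch below is an illustration of this standard technique, not the authors' code; the function name and data layout are hypothetical.

```python
import random

def class_aware_sample(image_classes, n_samples, seed=0):
    """Two-stage class-aware sampling: uniform over classes, then over images.

    image_classes: dict mapping image_id -> set of damage classes present.
    Returns a list of n_samples image ids (hypothetical interface).
    """
    rng = random.Random(seed)
    # Invert the mapping: class -> list of images containing that class.
    by_class = {}
    for img, classes in image_classes.items():
        for c in classes:
            by_class.setdefault(c, []).append(img)
    class_list = sorted(by_class)
    picks = []
    for _ in range(n_samples):
        c = rng.choice(class_list)          # uniform over classes, not images
        picks.append(rng.choice(by_class[c]))
    return picks
```

Even if a rare class appears in only 1% of images, it is drawn in roughly 1/C of the batches, which is what guarantees exposure to rare damage features during training.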
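The loss design combines OHEM (averaging cross-entropy over only the hardest pixels) with a soft Dice term that is insensitive to class frequency. A minimal NumPy sketch of that combination follows; it assumes softmax probabilities of shape (C, H, W) and integer labels of shape (H, W), and the function names and weighting are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss. probs: (C, H, W) softmax output; target: (H, W) labels."""
    num_classes = probs.shape[0]
    onehot = np.eye(num_classes)[target].transpose(2, 0, 1)    # (C, H, W)
    inter = (probs * onehot).sum(axis=(1, 2))
    union = probs.sum(axis=(1, 2)) + onehot.sum(axis=(1, 2))
    # Per-class Dice, averaged; eps keeps empty classes well-defined.
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def ohem_ce_loss(probs, target, keep_ratio=0.25):
    """Cross-entropy averaged over only the hardest fraction of pixels (OHEM)."""
    flat = probs.reshape(probs.shape[0], -1)                   # (C, H*W)
    ce = -np.log(flat[target.ravel(), np.arange(target.size)] + 1e-12)
    k = max(1, int(keep_ratio * ce.size))
    return np.sort(ce)[-k:].mean()                             # keep top-k hardest

def damage_aware_loss(probs, target, dice_weight=1.0):
    """Illustrative combined objective: OHEM cross-entropy + weighted Dice."""
    return ohem_ce_loss(probs, target) + dice_weight * dice_loss(probs, target)
```

OHEM prevents easy background pixels from dominating the gradient, while the Dice term directly optimizes region overlap per class, which is why the pairing tends to help underrepresented classes such as Minor and Major Damage.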