A Stability Benchmark of Generative Regularizers for Inverse Problems
2026-05-11 • Machine Learning
AI summary
The authors study how well generative diffusion models work for solving inverse imaging problems, especially in demanding real-world settings such as medical imaging. They focus on how stable and reliable these methods are under imperfect conditions, such as noisy measurements or an inaccurate forward model. By comparing these generative methods to traditional optimization-based techniques, the authors identify situations where generative approaches succeed and where they may struggle or be less reliable. Their work helps clarify when these new tools are useful and when caution is needed.
Keywords
generative priors, diffusion models, inverse problems, image reconstruction, stability, regularization, variational methods, out-of-distribution robustness, forward operator, noise model
Authors
Alexander Denker, Johannes Hertrich, Sebastian Neumayer
Abstract
Generative (diffusion) priors demonstrate remarkable performance in addressing inverse problems in imaging. Yet, for scientific and medical imaging, it is crucial that reconstruction techniques remain stable and reliable under imperfect conditions. Typical definitions of stability encompass the notion of "convergent regularization" as well as robustness to out-of-distribution data and to inaccuracies in the forward operator or noise model. We evaluate these properties numerically. Furthermore, we benchmark generative approaches against modern optimization-based methods inspired by widely used variational techniques. Our results give insight into the settings and applications in which generative priors can deliver state-of-the-art reconstructions, and those in which they fall short or may even be problematic.
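To make the notion of "convergent regularization" concrete, here is a minimal, self-contained sketch (not from the paper; the operator, signal, and parameter choice are purely illustrative) of a variational Tikhonov reconstruction for a linear inverse problem. The key property is that as the noise level δ shrinks and the regularization weight α(δ) is driven to zero with it, the reconstruction error should also shrink:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned linear forward operator A (a repeated
# smoothing step, mimicking blur); these choices are not from the paper.
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # discrete Laplacian
A = np.linalg.matrix_power(np.eye(n) - 0.02 * L, 40)   # smoothing operator
x_true = np.sin(np.linspace(0, 3 * np.pi, n))          # smooth ground truth

def tikhonov(y, alpha):
    """Variational reconstruction: argmin_x ||A x - y||^2 + alpha ||x||^2."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Convergent regularization: shrink the noise level delta together with
# the regularization weight (here the simple a-priori rule alpha = delta)
# and check that the reconstruction error decreases.
errors = []
for delta in [1e-1, 1e-2, 1e-3]:
    y = A @ x_true + delta * rng.standard_normal(n)
    x_rec = tikhonov(y, alpha=delta)
    err = np.linalg.norm(x_rec - x_true)
    errors.append(err)
    print(f"delta={delta:.0e}  reconstruction error={err:.3f}")
```

A learned or generative prior replaces the quadratic penalty above with a data-driven one; the benchmark question is whether such replacements still enjoy this kind of convergence and robustness.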