Duality for the Adversarial Total Variation

2026-04-20
Machine Learning

AI summary

The authors explore a way to understand and improve how binary classifiers (which decide between two options) learn to resist adversarial attacks, i.e., deliberately perturbed inputs designed to cause misclassification. They do this by viewing the problem through a mathematical lens involving a so-called nonlocal total variation, which acts as a regularizer in the learning process. Using duality techniques from convex analysis, they describe the detailed structure of this total variation and how it behaves, offering insights both for abstract metric spaces and for more familiar Euclidean domains. Their work clarifies the mathematical underpinnings of this regularization method, potentially guiding better training strategies in the future.

Keywords
adversarial training, binary classifiers, nonlocal total variation, regularized risk minimization, subdifferential, duality, metric spaces, Euclidean domains, nonlocal gradient, nonlocal divergence
Authors
Leon Bungert, Lucas Schmitt
Abstract
Adversarial training of binary classifiers can be reformulated as regularized risk minimization involving a nonlocal total variation. Building on this perspective, we establish a characterization of the subdifferential of this total variation using duality techniques. To achieve this, we derive a dual representation of the nonlocal total variation and a related integration by parts formula, involving a nonlocal gradient and divergence. We provide such duality statements both in the space of continuous functions vanishing at infinity on proper metric spaces and for the space of essentially bounded functions on Euclidean domains. Furthermore, under some additional conditions we provide characterizations of the subdifferential in these settings.
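The reformulation mentioned in the abstract can be sketched schematically as follows. The notation here (perturbation radius \(\varepsilon\), loss \(\ell\), classifier \(u\), data distribution over pairs \((x,y)\)) is illustrative and not taken from the paper; the precise definition of the nonlocal total variation \(\mathrm{TV}_\varepsilon\) is the subject of the work itself.

```latex
% Schematic sketch (illustrative notation, not from the paper):
% adversarial training minimizes the worst-case risk over an
% epsilon-ball of perturbations around each input x ...
\min_{u}\; \mathbb{E}_{(x,y)}\Big[\,
  \sup_{\tilde x \,:\, d(\tilde x, x) \le \varepsilon}
  \ell\big(u(\tilde x), y\big) \Big]
% ... which, in this line of work, can be rewritten as the standard
% (unperturbed) risk plus a nonlocal total variation penalty on u:
= \min_{u}\; \mathbb{E}_{(x,y)}\big[\, \ell\big(u(x), y\big) \,\big]
  \;+\; \varepsilon\, \mathrm{TV}_\varepsilon(u).
```

The duality results in the paper then characterize \(\mathrm{TV}_\varepsilon\) and its subdifferential via a dual representation and an integration by parts formula involving a nonlocal gradient and divergence.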