What Concepts Lie Within? Detecting and Suppressing Risky Content in Diffusion Transformers

2026-05-11
Computer Vision and Pattern Recognition · Cryptography and Security
AI summary

The authors study Diffusion Transformers (DiTs), text-to-image models that can sometimes generate risky content such as violent or copyrighted images. They find that certain components of the model, the attention heads, are sensitive to specific concepts, which makes it possible to both detect and reduce unwanted content. Building on this insight, they propose AHV-D&S, a training-free method that operates during image generation to detect and suppress risky content. Experiments show that the approach effectively lowers harmful outputs while preserving image quality, remains robust to adversarial prompts, and transfers across different DiT-based models.

Text-to-Image Models · Diffusion Transformers · Attention Heads · Semantic Injection · Visual Synthesis · Inference-time Safeguard · Attention Head Vector (AHV) · Denoising Steps · Adaptive Suppression · Adversarial Prompts
Authors
Chenyu Zhang, Lanjun Wang, Yueyang Cheng, Ruidong Chen, Wenhui Li, An-an Liu
Abstract
The rise of text-to-image (T2I) models has increasingly raised concerns regarding the generation of risky content, such as sexual, violent, and copyright-protected images, highlighting the need for effective safeguards within the models themselves. Although existing methods have been proposed to eliminate risky concepts from T2I models, they are primarily developed for earlier U-Net architectures, leaving the state-of-the-art Diffusion-Transformer-based T2I models inadequately protected. This gap stems from a fundamental architectural shift: Diffusion Transformers (DiTs) entangle semantic injection and visual synthesis via joint attention, which makes it difficult to isolate and erase risky content within the generation. To bridge this gap, we investigate how semantic concepts are represented in DiTs and discover that attention heads exhibit concept-specific sensitivity. This property enables both the detection and suppression of risky content. Building on this discovery, we propose AHV-D&S, a training-free inference-time safeguard for image generation in DiTs. Specifically, AHV-D&S quantifies each textual token's sensitivity across all attention heads as an Attention Head Vector (AHV), which serves as a discriminative signature for detecting risky generation tendencies. In the inference stage, we propose a momentum-based strategy to dynamically track token-wise AHVs across denoising steps, and a sensitivity-guided adaptive suppression strategy that suppresses the attention weights of identified risky tokens based on head-specific risk scores. Extensive experiments demonstrate that AHV-D&S effectively suppresses sexual, copyrighted-style, and various harmful content while preserving visual quality, and further exhibits strong robustness against adversarial prompts and transferability across different DiT-based T2I models.
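To make the abstract's pipeline concrete, here is a minimal NumPy sketch of the three ideas it names: building a token-wise Attention Head Vector, tracking it with momentum across denoising steps, and applying sensitivity-guided suppression of a risky token's attention weights. The `[heads, queries, keys]` attention layout, the cosine-similarity detector, the exponential suppression schedule, and all function names and thresholds are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attention_head_vector(attn, token_idx):
    """Per-token sensitivity across heads: mean attention that query
    positions pay to this text token, one entry per head.
    attn: [heads, queries, keys] joint-attention weights (assumed layout)."""
    return attn[:, :, token_idx].mean(axis=1)  # shape: [heads]

def momentum_update(ahv_prev, ahv_now, beta=0.9):
    """Momentum-based tracking of a token's AHV across denoising steps
    (beta is an assumed hyperparameter)."""
    return beta * ahv_prev + (1.0 - beta) * ahv_now

def detect_risky(ahv, signature, threshold=0.8):
    """Compare a token's AHV against a pre-computed risky-concept signature;
    cosine similarity is an assumed choice of discriminative match."""
    cos = ahv @ signature / (np.linalg.norm(ahv) * np.linalg.norm(signature) + 1e-8)
    return cos > threshold, cos

def suppress(attn, token_idx, risk_scores, alpha=1.0):
    """Sensitivity-guided suppression: down-weight the risky token's
    attention in each head, proportionally to that head's risk score."""
    out = attn.copy()
    scale = np.exp(-alpha * risk_scores)[:, None]  # [heads, 1], in (0, 1]
    out[:, :, token_idx] *= scale
    return out
```

Heads with higher risk scores are suppressed more strongly, while the attention weights of all other tokens are left untouched, which is what lets such a scheme reduce risky content without degrading overall visual quality.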