Steerable Visual Representations

2026-04-02

Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors introduce Steerable Visual Representations, a new class of image features that can be guided with natural language. Unlike prior methods that either fixate on the most salient parts of an image or lean heavily on language, their approach fuses text into the visual encoder early, so that attention can be directed toward less prominent details. On new benchmarks for representational steerability, the method focuses accurately on specified objects while remaining effective for general visual tasks, and it generalizes zero-shot to new, unseen problem types without additional training.

Keywords
Vision Transformers (ViTs), DINOv2, MAE (Masked Autoencoder), Multimodal Large Language Models (LLMs), Visual Representations, Cross-Attention, Early Fusion, Anomaly Detection, Zero-Shot Generalization, Representation Steering
Authors
Jona Ruthardt, Manu Gaur, Deva Ramanan, Makarand Tapaswi, Yuki M. Asano
Abstract
Pretrained Vision Transformers (ViTs) such as DINOv2 and MAE provide generic image features that can be applied to a variety of downstream tasks such as retrieval, classification, and segmentation. However, such representations tend to focus on the most salient visual cues in the image, with no way to direct them toward less prominent concepts of interest. In contrast, Multimodal LLMs can be guided with textual prompts, but the resulting representations tend to be language-centric and lose their effectiveness for generic visual tasks. To address this, we introduce Steerable Visual Representations, a new class of visual representations, whose global and local features can be steered with natural language. While most vision-language models (e.g., CLIP) fuse text with visual features after encoding (late fusion), we inject text directly into the layers of the visual encoder (early fusion) via lightweight cross-attention. We introduce benchmarks for measuring representational steerability, and demonstrate that our steerable visual features can focus on any desired objects in an image while preserving the underlying representation quality. Our method also matches or outperforms dedicated approaches on anomaly detection and personalized object discrimination, exhibiting zero-shot generalization to out-of-distribution tasks.
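The early-fusion mechanism described in the abstract can be sketched as a transformer block in which visual tokens first self-attend and then cross-attend to text tokens inside the encoder, rather than fusing modalities after encoding. The sketch below is a hypothetical illustration in PyTorch: the class name, dimensions, and placement of the cross-attention are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SteerableViTBlock(nn.Module):
    """Illustrative ViT block with early text fusion via cross-attention.

    Hypothetical sketch: the paper injects text into the layers of the
    visual encoder through lightweight cross-attention; the exact layer
    placement, widths, and normalization scheme here are assumptions.
    """

    def __init__(self, dim=768, text_dim=512, heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Lightweight cross-attention: visual tokens (queries) attend to
        # text tokens (keys/values), steering which image content is kept.
        self.cross_attn = nn.MultiheadAttention(
            dim, heads, batch_first=True, kdim=text_dim, vdim=text_dim
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, visual_tokens, text_tokens):
        # Standard pre-norm self-attention over visual tokens.
        h = self.norm1(visual_tokens)
        x = visual_tokens + self.self_attn(h, h, h, need_weights=False)[0]
        # Early fusion: inject the text prompt inside the visual encoder.
        h = self.norm2(x)
        x = x + self.cross_attn(h, text_tokens, text_tokens,
                                need_weights=False)[0]
        # Feed-forward sublayer.
        return x + self.mlp(self.norm3(x))


# Toy usage: 196 patch tokens steered by an 8-token text prompt.
block = SteerableViTBlock()
vis = torch.randn(2, 196, 768)   # (batch, patches, visual dim)
txt = torch.randn(2, 8, 512)     # (batch, text tokens, text dim)
out = block(vis, txt)
print(out.shape)  # torch.Size([2, 196, 768])
```

The steered output keeps the shape of the visual token sequence, so such a block could in principle drop into an existing ViT stack; in contrast, a late-fusion model like CLIP would only combine the two modalities after each encoder has finished.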