DetPO: In-Context Learning with Multi-Modal LLMs for Few-Shot Object Detection
2026-03-24 • Computer Vision and Pattern Recognition
AI summary
The authors studied multi-modal large language models (MLLMs) that can detect objects in images but found these models struggle with new kinds of objects or images they weren't trained on. They discovered that giving examples in the prompt doesn’t help as much as just listing class names for detection tasks. To improve this, they created a method called Detection Prompt Optimization (DetPO), which fine-tunes text prompts without changing the model itself. This technique helps the models become better at detecting objects using only a few examples and works better than previous methods. Their approach showed clear improvements on different datasets and models.
Multi-Modal Large Language Models · Object Detection · Out-of-Distribution Generalization · In-context Prompting · Few-shot Learning · Black-box Optimization · Gradient-free Optimization · Test-time Adaptation · Roboflow20-VL · LVIS Dataset
Authors
Gautam Rajendrakumar Gare, Neehar Peri, Matvei Popov, Shruti Jain, John Galeotti, Deva Ramanan
Abstract
Multi-Modal LLMs (MLLMs) demonstrate strong visual grounding capabilities on popular object detection benchmarks like OdinW-13 and RefCOCO. However, state-of-the-art models still struggle to generalize to out-of-distribution classes, tasks, and imaging modalities not typically found in their pre-training. While in-context prompting is a common strategy to improve performance across diverse tasks, we find that it often yields lower detection accuracy than prompting with class names alone. This suggests that current MLLMs cannot yet effectively leverage few-shot visual examples and rich textual descriptions for object detection. Since frontier MLLMs are typically only accessible via APIs, and state-of-the-art open-weights models are prohibitively expensive to fine-tune on consumer-grade hardware, we instead explore black-box prompt optimization for few-shot object detection. To this end, we propose Detection Prompt Optimization (DetPO), a gradient-free test-time optimization approach that refines text-only prompts by maximizing detection accuracy on few-shot visual training examples while calibrating prediction confidence. Our proposed approach yields consistent improvements across generalist MLLMs on Roboflow20-VL and LVIS, outperforming prior black-box approaches by up to 9.7%. Our code is available at https://github.com/ggare-cmu/DetPO.
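The abstract describes DetPO only at a high level: a gradient-free loop that edits a text prompt and keeps edits that raise detection accuracy on a handful of labeled examples. As a rough illustration of that general idea (not the authors' actual algorithm), the sketch below runs random-search hill climbing over prompt candidates; `score_prompt` is a toy stand-in for calling a black-box MLLM detector on the few-shot set, and the cue vocabulary and class names are invented for the example.

```python
import random

# Toy cues that the "true" scorer rewards; in reality the score would be
# few-shot detection accuracy returned by querying an MLLM via its API.
TARGET_TERMS = ("surgical clamp", "metallic", "hinged")

def score_prompt(prompt: str) -> float:
    """Hypothetical stand-in for few-shot detection accuracy in [0, 1]."""
    hits = sum(term in prompt for term in TARGET_TERMS)
    return hits / len(TARGET_TERMS)

def mutate(prompt: str, vocab: list[str], rng: random.Random) -> str:
    """Propose a neighboring prompt by appending one random textual cue."""
    return prompt + " " + rng.choice(vocab)

def optimize_prompt(seed_prompt: str, vocab: list[str],
                    iters: int = 200, seed: int = 0):
    """Gradient-free hill climbing: keep a mutation only if it scores higher."""
    rng = random.Random(seed)
    best, best_score = seed_prompt, score_prompt(seed_prompt)
    for _ in range(iters):
        candidate = mutate(best, vocab, rng)
        s = score_prompt(candidate)
        if s > best_score:          # accept only strict improvements
            best, best_score = candidate, s
    return best, best_score

# Some cues help detection, some (e.g. "blurry") do not.
vocab = ["surgical clamp", "metallic", "hinged", "blurry", "background"]
best, score = optimize_prompt("Detect:", vocab)
print(best, score)
```

Because only the prompt text changes, the model weights stay frozen and the optimizer needs nothing beyond query access, which matches the API-only setting the abstract motivates. The real method would also have to handle noisy accuracy estimates and confidence calibration, which this toy scorer ignores.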