DetRefiner: Model-Agnostic Detection Refinement with Feature Fusion Transformer

2026-05-11

Computer Vision and Pattern Recognition
AI summary

The authors introduce DetRefiner, a simple add-on that improves open-vocabulary object detection, which is the task of finding both known and unknown objects in images. DetRefiner combines overall image information with detailed local parts using a Transformer model to better guess what objects are present and how confident the detector should be. It works independently from existing detection models and only adjusts their confidence scores without needing to change the original model. Tests show DetRefiner consistently boosts detection accuracy for new object categories across several benchmark datasets.

Keywords
open-vocabulary object detection, global features, local features, Transformer encoder, confidence calibration, foundational models, DINOv3, COCO dataset, LVIS dataset, Pascal VOC
Authors
Soichiro Okazaki, Tatsuya Sasaki, Hiroki Ohashi
Abstract
Open-vocabulary object detection (OVOD) aims to detect both seen and unseen categories, yet existing methods often struggle to generalize to novel objects due to limited integration of global and local contextual cues. We propose DetRefiner, a simple yet effective plug-and-play framework that learns to fuse global and local features to refine open-vocabulary detection. DetRefiner processes global image features and patch-level image features from foundational models (e.g., DINOv3) through a lightweight Transformer encoder. The encoder produces a class vector capturing image-level attributes and patch vectors representing local region attributes, from which attribute reliability is inferred to recalibrate the base model's confidence. Notably, DetRefiner is trained independently of the base OVOD model, requiring neither access to its internal features nor retraining. At inference, it operates solely on the base detector's predictions, producing auxiliary calibration scores that are merged with the base detector's scores to yield the final refined confidence. Despite this simplicity, DetRefiner consistently enhances multiple OVOD models across COCO, LVIS, ODinW13, and Pascal VOC, achieving gains of up to +10.1 AP on novel categories. These results highlight that learning to fuse global and local representations offers a powerful and general mechanism for advancing open-world object detection. Our codes and models are available at https://github.com/hitachi-rd-cv/detrefiner.
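The mechanism described above can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the class name `DetRefinerSketch`, the head layers, the mean-pooling of patch logits, and the geometric-mean score fusion in `refine_scores` are all illustrative assumptions; only the overall shape (a Transformer encoder over a global token plus patch tokens, producing calibration scores merged with the base detector's confidences) follows the abstract.

```python
import torch
import torch.nn as nn


class DetRefinerSketch(nn.Module):
    """Hypothetical sketch of the DetRefiner idea: fuse a global image
    feature with patch-level features via a lightweight Transformer
    encoder and emit per-class calibration scores."""

    def __init__(self, dim: int = 768, num_classes: int = 80, num_layers: int = 2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        # Assumed heads: one for image-level attributes (class vector),
        # one for local region attributes (patch vectors).
        self.cls_head = nn.Linear(dim, num_classes)
        self.patch_head = nn.Linear(dim, num_classes)

    def forward(self, cls_token: torch.Tensor, patch_tokens: torch.Tensor) -> torch.Tensor:
        # cls_token: (B, 1, D) global feature; patch_tokens: (B, N, D)
        # from a frozen foundation backbone such as DINOv3.
        x = torch.cat([cls_token, patch_tokens], dim=1)
        x = self.encoder(x)
        img_logits = self.cls_head(x[:, 0])        # image-level class vector
        patch_logits = self.patch_head(x[:, 1:])   # per-patch attribute vectors
        # Assumed reliability estimate: agreement between global and
        # (mean-pooled) local evidence, mapped to [0, 1] per class.
        return torch.sigmoid(img_logits) * torch.sigmoid(patch_logits.mean(dim=1))


def refine_scores(base_scores: torch.Tensor, calib_scores: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """Merge the base detector's confidences with the auxiliary
    calibration scores (geometric-mean fusion; alpha is an assumption)."""
    return base_scores ** (1.0 - alpha) * calib_scores ** alpha
```

Because the refiner sees only the backbone features and the base detector's per-class scores, it can wrap any OVOD model without retraining it, which matches the plug-and-play claim in the abstract.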