Learning to Align Generative Appearance Priors for Fine-grained Image Retrieval

2026-05-11

Computer Vision and Pattern Recognition
AI summary

The authors address a core problem in fine-grained image retrieval: models are usually trained on known categories but struggle to generalize to new ones. They propose GAPan, a method that shifts the learning objective from category prediction to modeling detailed appearance features with an invertible density model (normalizing flows). The flow builds detailed feature distributions for known categories and uses samples from them to guide the model toward better recognition of unseen categories. Experiments show that GAPan improves retrieval performance on both fine- and coarse-grained image benchmarks.

Fine-grained image retrieval, Discriminative embeddings, Normalizing flows, Invertible density model, Class-conditional Gaussian prior, Likelihood estimation, Appearance modeling, Feature space, Generalization, Prior-driven alignment
Authors
Shijie Wang, Yadan Luo, Zijian Wang, Xin Yu, Zi Huang
Abstract
Fine-grained image retrieval (FGIR) typically relies on supervision from seen categories to learn discriminative embeddings for retrieving unseen categories. However, such supervision often biases retrieval models toward the semantics of seen categories rather than the underlying appearance characteristics that generalize across categories, thereby limiting retrieval performance on unseen categories. To tackle this, we propose GAPan, a Generative Appearance Prior alignment network that reformulates the learning objective from category prediction to appearance modeling. Technically, GAPan models retrieval features with an invertible density model based on normalizing flows. In the forward direction, the flow maps all instance features into a latent density space, where each seen category is modeled by a class-conditional Gaussian prior and optimized via exact likelihood estimation. This formulation preserves richer appearance details by leveraging the invertible property of the flows. In the reverse direction, samples from the high-density regions of these learned priors are mapped back to the feature space to produce appearance-aware anchors that reflect intra-category variation. These anchors supervise a prior-driven alignment objective that aligns retrieval embeddings with category-specific appearance distributions, thereby improving generalization to unseen categories. Evaluations demonstrate that GAPan achieves state-of-the-art performance on both widely used fine- and coarse-grained benchmarks.
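The forward/reverse mechanism in the abstract can be sketched in a few lines. The toy code below is an illustration under strong simplifying assumptions, not the authors' implementation: the flow is a single elementwise affine map (a real normalizing flow stacks many coupling layers), and `mu_c` stands in for a learned class-conditional Gaussian mean. It shows the two directions described: exact log-likelihood via the change-of-variables formula in the forward pass, and mapping high-density prior samples back to feature space as appearance-aware anchors in the reverse pass.

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineFlow:
    """Toy invertible map z = (x - b) * exp(-s).

    A minimal, hypothetical stand-in for a normalizing flow; the Jacobian
    of this map is diagonal, so its log-determinant is just -sum(s).
    """
    def __init__(self, dim):
        self.s = rng.normal(scale=0.1, size=dim)  # log-scale parameters
        self.b = rng.normal(scale=0.1, size=dim)  # shift parameters

    def forward(self, x):
        # Feature space -> latent space, with log|det dz/dx| for exact likelihood.
        z = (x - self.b) * np.exp(-self.s)
        return z, -self.s.sum()

    def inverse(self, z):
        # Latent space -> feature space (exact, thanks to invertibility).
        return z * np.exp(self.s) + self.b

def class_log_likelihood(flow, x, mu):
    # Change of variables: log p(x|c) = log N(z; mu_c, I) + log|det dz/dx|.
    z, log_det = flow.forward(x)
    d = x.shape[-1]
    log_gauss = -0.5 * (np.sum((z - mu) ** 2, axis=-1) + d * np.log(2 * np.pi))
    return log_gauss + log_det

dim = 4
flow = AffineFlow(dim)
mu_c = np.zeros(dim)                    # class-conditional Gaussian mean (one seen category)

x = rng.normal(size=(8, dim))           # stand-in retrieval features
ll = class_log_likelihood(flow, x, mu_c)  # forward direction: exact likelihoods

# Reverse direction: sample near the prior mode (a high-density region)
# and map back to feature space to obtain appearance-aware anchors.
z_anchor = mu_c + 0.1 * rng.normal(size=(8, dim))
anchors = flow.inverse(z_anchor)

# Invertibility check: inverse(forward(x)) recovers x exactly.
z, _ = flow.forward(x)
assert np.allclose(flow.inverse(z), x)
```

In a training loop, `ll` would be maximized over seen-category features, and the `anchors` would supervise an alignment loss on the retrieval embeddings; both steps depend on the flow being exactly invertible, which is what distinguishes this family of density models.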