Where Do Vision-Language Models Fail? World Scale Analysis for Image Geolocalization
2026-04-17 • Computer Vision and Pattern Recognition
AI summary
The authors evaluated several Vision-Language Models (VLMs) on how well they can predict the country where a photo was taken, using only the image itself and no location metadata. The models were not trained for geography; they were simply prompted to make predictions in a zero-shot setting. Experiments on three datasets showed that some models can identify the country reasonably well at a coarse level, but many struggle with fine-grained location cues. This work is the first to compare modern VLMs specifically for country-level geolocalization and identifies where these models succeed and where they fall short.
Keywords
Image Geolocalization, Vision-Language Models, Zero-shot Learning, Multimodal Reasoning, Prompting, Country-level Prediction, Ground-view Imagery, Geographic Inference, Visual Localization, Place Recognition
Authors
Siddhant Bharadwaj, Ashish Vashist, Fahimul Aleem, Shruti Vyas
Abstract
Image geolocalization has traditionally been addressed through retrieval-based place recognition or geometry-based visual localization pipelines. Recent advances in Vision-Language Models (VLMs) have demonstrated strong zero-shot reasoning capabilities across multimodal tasks, yet their performance in geographic inference remains underexplored. In this work, we present a systematic evaluation of multiple state-of-the-art VLMs for country-level image geolocalization using ground-view imagery only. Instead of relying on image matching, GPS metadata, or task-specific training, we evaluate prompt-based country prediction in a zero-shot setting. The selected models are tested on three geographically diverse datasets to assess their robustness and generalization ability. Our results reveal substantial variation across models, highlighting the potential of semantic reasoning for coarse geolocalization and the limitations of current VLMs in capturing fine-grained geographic cues. This study provides the first focused comparison of modern VLMs for country-level geolocalization and establishes a foundation for future research at the intersection of multimodal reasoning and geographic understanding.
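The evaluation protocol described above — prompting a VLM with a ground-view image and scoring its free-text country prediction against a label — can be sketched as follows. This is a minimal illustration, not the authors' code: the prompt wording, response-normalization rules, and helper names (`build_prompt`, `parse_country`, `country_accuracy`) are all assumptions, and a real run would send each image plus the prompt to a VLM instead of using mock responses.

```python
def build_prompt() -> str:
    """Zero-shot instruction sent alongside the image (no GPS, no retrieval).
    Exact wording is hypothetical, not taken from the paper."""
    return ("Look at this ground-view photo and name the single country "
            "where it was most likely taken. Answer with the country name only.")

def parse_country(response: str) -> str:
    """Normalize a free-text model response to a bare country label."""
    # Take the first line, strip surrounding punctuation, and title-case it.
    first_line = response.strip().splitlines()[0]
    return first_line.strip(" .!\"'").title()

def country_accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    """Fraction of images whose predicted country matches the label."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Mock model outputs stand in for real VLM responses here.
raw_responses = ["Japan.", "france", "Brazil"]
labels = ["Japan", "France", "Argentina"]
preds = [parse_country(r) for r in raw_responses]
print(country_accuracy(preds, labels))  # 2 of 3 correct
```

In an actual evaluation, `raw_responses` would come from querying each VLM once per image with `build_prompt()`, and the same parsing and scoring would be applied uniformly across models and datasets.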