Just Zoom In: Cross-View Geo-Localization via Autoregressive Zooming

2026-03-26 · Computer Vision and Pattern Recognition · Artificial Intelligence
AI summary

The authors studied how to find a street-view photo’s exact location by matching it to a satellite photo. Instead of the usual method that compares many images at once, they created a step-by-step zooming approach over a large map to home in on the spot. Their method avoids some common problems and works better in real-world conditions, showing improved accuracy on a new realistic test set. This shows that zooming in gradually on a map can help find locations more precisely.

Keywords: cross-view geo-localization, street-view imagery, satellite imagery, image retrieval, contrastive learning, autoregressive model, spatial reasoning, Recall@1, hard negative mining, map coverage
Authors
Yunus Talha Erzurumlu, Jiyong Kwag, Alper Yilmaz
Abstract
Cross-view geo-localization (CVGL) estimates a camera's location by matching a street-view image to geo-referenced overhead imagery, enabling GPS-denied localization and navigation. Existing methods almost universally formulate CVGL as an image-retrieval problem in a contrastively trained embedding space. This ties performance to large batches and hard negative mining, and it ignores both the geometric structure of maps and the coverage mismatch between street-view and overhead imagery. In particular, salient landmarks visible from the street view can fall outside a fixed satellite crop, making retrieval targets ambiguous and limiting explicit spatial inference over the map. We propose Just Zoom In, an alternative formulation that performs CVGL via autoregressive zooming over a city-scale overhead map. Starting from a coarse satellite view, the model takes a short sequence of zoom-in decisions to select a terminal satellite cell at a target resolution, without contrastive losses or hard negative mining. We further introduce a realistic benchmark with crowd-sourced street views and high-resolution satellite imagery that reflects real capture conditions. On this benchmark, Just Zoom In achieves state-of-the-art performance, improving Recall@1 within 50 m by 5.5% and Recall@1 within 100 m by 9.6% over the strongest contrastive-retrieval baseline. These results demonstrate the effectiveness of sequential coarse-to-fine spatial reasoning for cross-view geo-localization.
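The coarse-to-fine procedure the abstract describes can be pictured as repeatedly subdividing the current map cell and descending into the sub-cell the model selects. The sketch below is only an illustration of that idea: the 2×2 quadtree split, the fixed number of steps, and the `choose_quadrant` callback (a stand-in for the learned model conditioned on the street-view image) are assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of autoregressive "zoom-in" localization over an
# overhead map. The quadtree split and choose_quadrant() are illustrative
# assumptions; the paper's model and cell scheme may differ.

def zoom_localize(bounds, steps, choose_quadrant):
    """Repeatedly split the current cell into 2x2 quadrants and descend
    into the quadrant picked by choose_quadrant (the model stand-in)."""
    x0, y0, x1, y1 = bounds
    for step in range(steps):
        xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        # Quadrant order 0=NW, 1=NE, 2=SW, 3=SE (an arbitrary convention here).
        quadrants = [
            (x0, y0, xm, ym), (xm, y0, x1, ym),
            (x0, ym, xm, y1), (xm, ym, x1, y1),
        ]
        q = choose_quadrant(step, quadrants)  # model would look at imagery here
        x0, y0, x1, y1 = quadrants[q]
    return (x0, y0, x1, y1)  # terminal satellite cell

# Toy "policy": always descend into the south-east quadrant of a 1024 m map.
cell = zoom_localize((0.0, 0.0, 1024.0, 1024.0), steps=3,
                     choose_quadrant=lambda step, quads: 3)
print(cell)  # → (896.0, 896.0, 1024.0, 1024.0)
```

Each zoom step quarters the area of the candidate region, so a short decision sequence suffices to reach a fine terminal cell from a city-scale map, with no contrastive embedding comparison against a gallery of crops.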