🤖 AI Summary
Current visual place recognition (VPR) models are typically trained on single datasets, resulting in limited generalization. While multi-dataset joint training improves universality, inter-domain discrepancies often saturate the feature aggregation layers. To address this, we propose Query-based Adaptive Aggregation (QAA): learnable queries serve as a reference codebook, and features are dynamically weighted during aggregation via Cross-query Similarity (CS), thereby enhancing descriptor discriminability and representational capacity. Built on a Transformer architecture, QAA is trained jointly across multiple datasets, and attention-based visualizations are used to analyze its behavior. Evaluated across multiple benchmarks, QAA achieves peak performance comparable to dataset-specific models while substantially improving balanced cross-dataset generalization. Our method establishes new state-of-the-art (SOTA) results, demonstrating both effectiveness and robustness under domain shifts.
📝 Abstract
Deep learning methods for Visual Place Recognition (VPR) have advanced significantly, largely driven by large-scale datasets. However, most existing approaches are trained on a single dataset, which can introduce dataset-specific inductive biases and limit model generalization. While multi-dataset joint training offers a promising path toward universal VPR models, divergences among training datasets can saturate the limited information capacity of feature aggregation layers, leading to suboptimal performance. To address these challenges, we propose Query-based Adaptive Aggregation (QAA), a novel feature aggregation technique that leverages learned queries as reference codebooks to effectively enhance information capacity without significantly increasing computational cost or parameter count. We show that computing the Cross-query Similarity (CS) between query-level image features and reference codebooks provides a simple yet effective way to generate robust descriptors. Our results demonstrate that QAA outperforms state-of-the-art models, achieving balanced generalization across diverse datasets while maintaining peak performance comparable to dataset-specific models. Ablation studies further explore QAA's mechanisms and scalability. Visualizations reveal that the learned queries exhibit diverse attention patterns across datasets. Code will be publicly released.
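To make the CS mechanism described above concrete, the following is a minimal numpy sketch of the idea: learned queries attend over local image features to produce query-level features, and the descriptor is built from their similarity to a reference codebook rather than from the aggregated features directly. All shapes, the use of the queries themselves as the codebook, and the softmax/cosine details are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize vectors to unit L2 norm along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def qaa_descriptor(features, queries, codebook):
    """Sketch of query-based adaptive aggregation (shapes are assumptions).

    features: (N, D) local image features from a backbone
    queries:  (M, D) learned queries
    codebook: (K, D) reference codebook (here we reuse the queries)
    returns:  flattened, L2-normalized global descriptor of size M*K
    """
    d = features.shape[1]
    # Queries attend over feature locations (scaled dot-product + softmax).
    logits = queries @ features.T / np.sqrt(d)               # (M, N)
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    query_feats = attn @ features                            # (M, D) query-level features
    # Cross-query similarity: compare query-level features to the codebook.
    cs = l2_normalize(query_feats) @ l2_normalize(codebook).T  # (M, K)
    return l2_normalize(cs.reshape(-1))                      # global descriptor

# Toy usage with random features and queries.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64))   # 100 local features, dim 64
qs = rng.standard_normal((8, 64))        # 8 learned queries
desc = qaa_descriptor(feats, qs, qs)     # (64,) descriptor
```

Because the descriptor is built from similarities to a fixed-size codebook, its dimensionality depends only on the number of queries and codebook entries, not on the feature dimension, which is one way the design can add capacity cheaply.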