🤖 AI Summary
This work addresses the issue of uneven utilization of fixed learnable queries in DETR and its variants by proposing a dynamic query generation mechanism. The approach models shared latent patterns and adaptively generates queries weighted by image content, enabling more efficient and context-aware query usage. Furthermore, it introduces a localization-classification consistency-driven, quality-aware one-to-many positive sample assignment strategy to enhance supervision balance and semantic interpretability. Evaluated on standard benchmarks including COCO and Cityscapes, the method consistently improves mAP by 1.5%–4.2% across various DETR backbones and reveals that dynamically generated queries exhibit semantically meaningful clustering across object categories.
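The summary's core mechanism — dynamically generating image-specific queries as content-conditioned mixtures of a small set of shared latent patterns — can be sketched as follows. The paper's actual architecture is not given here, so the shapes, the pooled image feature, and the linear weighting head are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_patterns, num_queries, dim = 16, 100, 256

# Shared latent patterns, learned once across the dataset (assumed shape).
patterns = rng.standard_normal((num_patterns, dim))

# Image content feature, e.g. pooled from the backbone (hypothetical input).
image_feat = rng.standard_normal(dim)

# Hypothetical linear head: maps the image feature to per-query
# mixing logits over the shared patterns.
W = rng.standard_normal((num_queries * num_patterns, dim)) * 0.02
logits = (W @ image_feat).reshape(num_queries, num_patterns)

# Softmax over patterns, so each query is a convex combination of them;
# the weights change per image, making the queries content-adaptive.
weights = np.exp(logits - logits.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Dynamically generated, image-specific queries.
queries = weights @ patterns
print(queries.shape)  # (100, 256)
```

Because every query is a convex combination of the same pattern bank, queries for similar content reuse similar patterns, which is consistent with the semantic clustering the summary reports.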
📝 Abstract
Detection Transformer (DETR) has redefined object detection by casting it as a set prediction task within an end-to-end framework. Despite its elegance, DETR and its variants still rely on fixed learnable queries and suffer from severe query utilization imbalance, which limits adaptability and leaves model capacity underused. We propose PaQ-DETR (Pattern and Quality-Aware DETR), a unified framework that enhances both query adaptivity and supervision balance. It learns a compact set of shared latent patterns capturing global semantics and dynamically generates image-specific queries through content-conditioned weighting. In parallel, a quality-aware one-to-many assignment strategy adaptively selects positive samples based on localization-classification consistency, enriching supervision and promoting balanced query optimization. Experiments on COCO, Cityscapes, and other benchmarks show consistent gains of 1.5%–4.2% mAP across DETR backbones, including ResNet and Swin Transformer. Beyond accuracy improvement, our method provides interpretable insights into how dynamic patterns cluster semantically across object categories.
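The quality-aware one-to-many assignment described above can be illustrated with a toy sketch: each ground-truth object takes its top-k candidate predictions, ranked by a score that is high only when classification confidence and localization quality (IoU) agree. The specific scoring formula and the `alpha` trade-off parameter below are hypothetical stand-ins, not the paper's exact criterion:

```python
import numpy as np

def quality_aware_assign(cls_scores, ious, k=4, alpha=0.5):
    """Toy one-to-many assignment: pick the top-k candidates per ground
    truth by a localization-classification consistency score.

    cls_scores, ious: arrays of shape (num_preds, num_gt).
    alpha: hypothetical trade-off between the two signals.
    """
    # Geometric-mean-style score: large only when both terms agree,
    # penalizing confident-but-misplaced (or well-placed-but-unsure) boxes.
    quality = (cls_scores ** alpha) * (ious ** (1.0 - alpha))
    k = min(k, quality.shape[0])
    # One-to-many: each ground truth (column) keeps k positives, not one.
    topk = np.argsort(-quality, axis=0)[:k]  # shape (k, num_gt)
    return topk, quality

# Tiny example: 3 candidate predictions, 1 ground-truth object.
cls = np.array([[0.9], [0.2], [0.8]])
ious = np.array([[0.8], [0.9], [0.1]])
topk, quality = quality_aware_assign(cls, ious, k=2)
print(topk[:, 0])  # → [0 1]
```

Note how prediction 2 (high score, low IoU) is rejected despite its confidence: the consistency-driven score is what keeps supervision aligned between the two heads.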