YOLO-RD: Introducing Relevant and Compact Explicit Knowledge to YOLO by Retriever-Dictionary

πŸ“… 2024-10-20
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing object detection models (e.g., YOLO) rely heavily on single-frame inputs and neglect dataset-level global knowledge, limiting their generalization and accuracy. To address this, we propose the Retriever-Dictionary (RD) moduleβ€”a lightweight, retrievable external explicit knowledge dictionary that integrates insights from vision models, large language models, and multimodal models. RD is embedded into one-stage detectors to enable dynamic, feature-level knowledge retrieval and fusion. It is plug-and-play, compatible with mainstream architectures including YOLO variants, Faster R-CNN, and Deformable DETR, incurs <1% additional parameters, and supports cross-task (detection, segmentation, classification) and cross-paradigm generalization. On object detection benchmarks, RD improves mAP by over 3%, while also substantially enhancing segmentation and classification performance. This work establishes a novel paradigm for incorporating dataset-level prior knowledge into detection models.
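To make the mechanism concrete, here is a minimal sketch of how a Retriever-Dictionary block could work: per-pixel features query a small bank of learned dictionary atoms, and the retrieved knowledge is fused back through a residual connection. The class name, shapes, and attention-style retrieval are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrieverDictionary(nn.Module):
    """Hypothetical sketch of a Retriever-Dictionary block (not the paper's code).

    Each spatial feature queries a compact dictionary of atoms; the atoms
    could be initialized from VM/LLM/VLM embeddings to inject dataset-level
    explicit knowledge.
    """

    def __init__(self, channels: int, num_atoms: int = 256):
        super().__init__()
        # Learned dictionary of knowledge atoms, shape (K, C).
        self.dictionary = nn.Parameter(torch.randn(num_atoms, channels))
        # 1x1 conv producing retrieval queries from backbone features.
        self.query_proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # (B, C, H, W) -> (B, HW, C) queries.
        q = self.query_proj(x).flatten(2).transpose(1, 2)
        # Similarity of each pixel to each atom, softmax over atoms.
        attn = F.softmax(q @ self.dictionary.t() / c ** 0.5, dim=-1)
        # Weighted sum of atoms = retrieved explicit knowledge, (B, HW, C).
        retrieved = attn @ self.dictionary
        out = retrieved.transpose(1, 2).reshape(b, c, h, w)
        # Residual fusion keeps the module plug-and-play for any backbone.
        return x + out
```

Because the block preserves the feature map's shape and adds only a small dictionary plus a 1x1 projection, it can be dropped between backbone stages of YOLO-style detectors with a negligible parameter increase, consistent with the <1% overhead the paper reports.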

πŸ“ Abstract
Identifying and localizing objects within images is a fundamental challenge, and numerous efforts have been made to enhance model accuracy by experimenting with diverse architectures and refining training strategies. Nevertheless, a prevalent limitation in existing models is that they overemphasize the current input while ignoring information from the entire dataset. We introduce an innovative Retriever-Dictionary (RD) module to address this issue. This architecture enables YOLO-based models to efficiently retrieve features from a Dictionary that contains the insights of the dataset, built from the knowledge of Visual Models (VM), Large Language Models (LLM), or Visual Language Models (VLM). The flexible RD enables the model to incorporate such explicit knowledge, enhancing its ability across multiple tasks, specifically segmentation, detection, and classification, from the pixel to the image level. The experiments show that using the RD significantly improves model performance, achieving more than a 3% increase in mean Average Precision for object detection with less than a 1% increase in model parameters. Beyond 1-stage object detection models, the RD module also improves the effectiveness of 2-stage models and DETR-based architectures, such as Faster R-CNN and Deformable DETR. Code is released at https://github.com/henrytsui000/YOLO.
Problem

Research questions and friction points this paper is trying to address.

Enhance object detection accuracy
Incorporate dataset-wide explicit knowledge
Improve segmentation and classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retriever-Dictionary module
Integrates Visual and Language Models
Enhances YOLO for multiple tasks
Hao-Tang Tsui
Institute of Information Science, Academia Sinica
Chien-Yao Wang
Institute of Information Science, Academia Sinica
Hongpeng Liao
Institute of Information Science, Academia Sinica