A Simple Aerial Detection Baseline of Multimodal Language Models

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal language models (MLMs) exhibit limited performance in aerial remote sensing object detection, particularly in comprehending multi-class rotated objects in sky-view imagery. This work pioneers the application of MLMs to aerial detection, proposing LMMRotate—a benchmark framework featuring a learnable textual encoding for rotated bounding boxes (parameterized by center coordinates, width, height, and orientation angle), enabling unified autoregressive sequence generation for detection outputs. We establish a fair evaluation protocol comparing MLMs with conventional rotated detectors (e.g., Rotated Faster R-CNN), adopting Rotated DETR-style metrics. Fine-tuning open-source MLMs—including LLaVA and Qwen-VL—on DOTA-v1.5 achieves mAP competitive with state-of-the-art rotated detectors, demonstrating the feasibility of unifying remote sensing image understanding and detection via MLMs. Key contributions include: (1) the first MLM-based aerial detection paradigm; (2) a learnable textual encoding mechanism for rotated boxes; and (3) a standardized evaluation protocol for fair cross-paradigm benchmarking.

📝 Abstract
Multimodal language models (MLMs) based on the generative pre-trained Transformer are considered powerful candidates for unifying various domains and tasks. MLMs developed for remote sensing (RS) have demonstrated outstanding performance in multiple tasks, such as visual question answering and visual grounding. Beyond visual grounding, which detects specific objects corresponding to a given instruction, aerial detection, which detects all objects of multiple categories, is also a valuable and challenging task for RS foundation models. However, aerial detection has not been explored by existing RS MLMs because the autoregressive prediction mechanism of MLMs differs significantly from detection outputs. In this paper, we present a simple baseline for applying MLMs to aerial detection for the first time, named LMMRotate. Specifically, we first introduce a normalization method that transforms detection outputs into textual outputs compatible with the MLM framework. Then, we propose an evaluation method that ensures a fair comparison between MLMs and conventional object detection models. We construct the baseline by fine-tuning open-source general-purpose MLMs and achieve impressive detection performance comparable to conventional detectors. We hope that this baseline will serve as a reference for future MLM development, enabling more comprehensive capabilities for understanding RS images. Code is available at https://github.com/Li-Qingyun/mllm-mmrotate.
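To make the normalization idea concrete, the sketch below shows one plausible way to map a rotated box (center, width, height, angle) into a discretized text sequence an autoregressive model can emit, and back. The token format, bin count, and angle convention here are assumptions for illustration; the paper's actual encoding is defined in the linked repository.

```python
# Hypothetical sketch: rotated box <-> text, for autoregressive detection.
# The exact format used by LMMRotate may differ; see the official repo.

def box_to_text(cx, cy, w, h, angle_deg, img_w, img_h, bins=1000):
    """Normalize a rotated box (center, size, angle) into integer bins
    and serialize it as a plain-text token sequence."""
    vals = [
        round(cx / img_w * (bins - 1)),
        round(cy / img_h * (bins - 1)),
        round(w / img_w * (bins - 1)),
        round(h / img_h * (bins - 1)),
        round((angle_deg % 180) / 180 * (bins - 1)),  # angle folded into [0, 180)
    ]
    return "<box>" + " ".join(str(v) for v in vals) + "</box>"

def text_to_box(text, img_w, img_h, bins=1000):
    """Invert box_to_text: parse a generated token sequence back into
    continuous box parameters in image coordinates."""
    inner = text.removeprefix("<box>").removesuffix("</box>").strip()
    cx, cy, w, h, a = (int(t) for t in inner.split())
    return (
        cx / (bins - 1) * img_w,
        cy / (bins - 1) * img_h,
        w / (bins - 1) * img_w,
        h / (bins - 1) * img_h,
        a / (bins - 1) * 180,
    )
```

Discretizing into a fixed bin vocabulary is what lets detection outputs share the same next-token objective as ordinary text; the quantization error is bounded by the bin width.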
Problem

Research questions and friction points this paper is trying to address.

Multimodal Language Models
Aerial Reconnaissance Tasks
Object Recognition Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LMMRotate
Multimodal Language Model
Aerial Object Detection
Qingyun Li
University of Electronic Science and Technology of China
wireless communications, information theory
Yushi Chen
Harbin Institute of Technology
remote sensing, machine learning
Xinya Shu
Harbin Institute of Technology
Dong Chen
Harbin Institute of Technology
Xingyu He
Harbin Institute of Technology
Yi Yu
Southeast University
Xue Yang
Shanghai Jiao Tong University