UAVBench: An Open Benchmark Dataset for Autonomous and Agentic AI UAV Systems via LLM-Generated Flight Scenarios

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluations lack standardized, physically grounded benchmarks tailored to autonomous unmanned aerial vehicles (UAVs). Method: We introduce UAVBench, the first open-source, multi-stage safety-validated UAV reasoning benchmark, comprising 50,000 structured (JSON) flight scenarios covering ten cognitive capabilities, including perception, policy reasoning, ethical judgment, and resource-constrained decision-making. It employs a taxonomy-guided LLM generation paradigm combined with physics-based constraint verification, multi-level safety filtering, and expert human validation to ensure scenario fidelity and interpretability; a multiple-choice question subset, UAVBench_MCQ, is also released. Contribution/Results: Systematic evaluation of 32 state-of-the-art LLMs reveals strong performance in perception and policy reasoning but significant deficiencies in ethical reasoning and resource-constrained decision-making. UAVBench establishes a reproducible, verifiable evaluation infrastructure and a standardized methodology for advancing autonomous UAV intelligence.

📝 Abstract
Autonomous aerial systems increasingly rely on large language models (LLMs) for mission planning, perception, and decision-making, yet the lack of standardized and physically grounded benchmarks limits systematic evaluation of their reasoning capabilities. To address this gap, we introduce UAVBench, an open benchmark dataset comprising 50,000 validated UAV flight scenarios generated through taxonomy-guided LLM prompting and multi-stage safety validation. Each scenario is encoded in a structured JSON schema that includes mission objectives, vehicle configuration, environmental conditions, and quantitative risk labels, providing a unified representation of UAV operations across diverse domains. Building on this foundation, we present UAVBench_MCQ, a reasoning-oriented extension containing 50,000 multiple-choice questions spanning ten cognitive and ethical reasoning styles, ranging from aerodynamics and navigation to multi-agent coordination and integrated reasoning. This framework enables interpretable and machine-checkable assessment of UAV-specific cognition under realistic operational contexts. We evaluate 32 state-of-the-art LLMs, including GPT-5, ChatGPT-4o, Gemini 2.5 Flash, DeepSeek V3, Qwen3 235B, and ERNIE 4.5 300B, and find strong performance in perception and policy reasoning but persistent challenges in ethics-aware and resource-constrained decision-making. UAVBench establishes a reproducible and physically grounded foundation for benchmarking agentic AI in autonomous aerial systems and advancing next-generation UAV reasoning intelligence. To support open science and reproducibility, we release the UAVBench dataset, the UAVBench_MCQ benchmark, evaluation scripts, and all related materials on GitHub at https://github.com/maferrag/UAVBench
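The abstract describes each scenario as a structured JSON record with mission objectives, vehicle configuration, environmental conditions, and quantitative risk labels. A minimal sketch of what such a record might look like is below; the field names and values are illustrative assumptions based on the components the abstract lists, not the dataset's actual schema.

```python
import json

# Hypothetical UAVBench-style scenario record. Field names are
# illustrative assumptions, not the dataset's actual schema.
scenario = {
    "scenario_id": "uavb-000001",
    "mission_objectives": ["survey crop field", "return to base within 20 min"],
    "vehicle_configuration": {"type": "quadcopter", "max_speed_mps": 18.0, "battery_wh": 90.0},
    "environmental_conditions": {"wind_mps": 6.5, "visibility_km": 4.0, "precipitation": "light rain"},
    "risk_labels": {"collision_risk": 0.22, "battery_risk": 0.41, "overall_risk": "moderate"},
}

# A structured schema makes scenarios machine-checkable: for example,
# round-trip the record through JSON and verify risk scores are valid
# probabilities in [0, 1].
parsed = json.loads(json.dumps(scenario))
assert all(
    0.0 <= v <= 1.0
    for v in parsed["risk_labels"].values()
    if isinstance(v, float)
)
```

This kind of validation is presumably what makes the benchmark "machine-checkable": every record can be verified against the schema before being used for evaluation.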
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of standardized benchmarks for autonomous UAV systems
Evaluates LLM reasoning capabilities in realistic flight scenarios
Assesses cognitive and ethical decision-making in aerial operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates UAV flight scenarios via LLM prompting
Encodes scenarios in structured JSON with risk labels
Evaluates LLM reasoning with multiple-choice questions
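The multiple-choice evaluation in the last bullet amounts to exact-match scoring of predicted answer letters, broken down by reasoning style. A minimal, hypothetical scorer is sketched below; the question format and style names are assumptions for illustration (the actual evaluation scripts are released on the project's GitHub).

```python
# Minimal sketch of per-style MCQ accuracy scoring. The item format
# and style names are illustrative assumptions, not UAVBench_MCQ's
# actual schema.
from collections import defaultdict


def score_mcq(questions, model_answers):
    """Return accuracy per reasoning style.

    questions: list of dicts with "id", "style", and gold "answer" letter.
    model_answers: dict mapping question id -> predicted letter.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for q in questions:
        total[q["style"]] += 1
        if model_answers.get(q["id"]) == q["answer"]:
            correct[q["style"]] += 1
    return {style: correct[style] / total[style] for style in total}


questions = [
    {"id": "q1", "style": "navigation", "answer": "B"},
    {"id": "q2", "style": "navigation", "answer": "D"},
    {"id": "q3", "style": "ethics", "answer": "A"},
]
predictions = {"q1": "B", "q2": "A", "q3": "A"}
scores = score_mcq(questions, predictions)
print(scores)  # {'navigation': 0.5, 'ethics': 1.0}
```

Reporting accuracy per style rather than a single aggregate is what lets the paper separate strong areas (perception, policy reasoning) from weak ones (ethics-aware and resource-constrained decision-making).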
M. Ferrag
Department of Computer and Network Engineering, College of Information Technology, United Arab Emirates University, Al Ain, United Arab Emirates
Abderrahmane Lakas
Professor, Computer Engineering, UAE University
Mobile Networks, Vehicular Networks, IoT, Unmanned Vehicles, AI
M. Debbah
Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates