🤖 AI Summary
Existing LLM evaluation lacks standardized, physically grounded benchmarks tailored to autonomous unmanned aerial vehicles (UAVs). Method: we introduce UAVBench, the first open-source, multi-stage safety-validated UAV reasoning benchmark, comprising 50,000 structured (JSON) flight scenarios covering ten cognitive capabilities, including perception, policy reasoning, ethical judgment, and resource-constrained decision-making. It employs a taxonomy-guided LLM generation paradigm combined with physics-based constraint verification, multi-level safety filtering, and expert human validation to ensure scenario fidelity and interpretability; we also release UAVBench_MCQ, a 50,000-item multiple-choice extension. Contribution/Results: systematic evaluation of 32 state-of-the-art LLMs reveals robust performance in perception and policy reasoning but significant deficiencies in ethical reasoning and resource-constrained decision-making. UAVBench establishes a reproducible, verifiable evaluation infrastructure and a standardized methodology for advancing autonomous UAV intelligence.
📝 Abstract
Autonomous aerial systems increasingly rely on large language models (LLMs) for mission planning, perception, and decision-making, yet the lack of standardized and physically grounded benchmarks limits systematic evaluation of their reasoning capabilities. To address this gap, we introduce UAVBench, an open benchmark dataset comprising 50,000 validated UAV flight scenarios generated through taxonomy-guided LLM prompting and multi-stage safety validation. Each scenario is encoded in a structured JSON schema that includes mission objectives, vehicle configuration, environmental conditions, and quantitative risk labels, providing a unified representation of UAV operations across diverse domains. Building on this foundation, we present UAVBench_MCQ, a reasoning-oriented extension containing 50,000 multiple-choice questions spanning ten cognitive and ethical reasoning styles, ranging from aerodynamics and navigation to multi-agent coordination and integrated reasoning. This framework enables interpretable and machine-checkable assessment of UAV-specific cognition under realistic operational contexts. We evaluate 32 state-of-the-art LLMs, including GPT-5, ChatGPT-4o, Gemini 2.5 Flash, DeepSeek V3, Qwen3 235B, and ERNIE 4.5 300B, and find strong performance in perception and policy reasoning but persistent challenges in ethics-aware and resource-constrained decision-making. UAVBench establishes a reproducible and physically grounded foundation for benchmarking agentic AI in autonomous aerial systems and for advancing next-generation UAV reasoning intelligence. To support open science and reproducibility, we release the UAVBench dataset, the UAVBench_MCQ benchmark, evaluation scripts, and all related materials on GitHub at https://github.com/maferrag/UAVBench.
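To make the "structured JSON schema" concrete, the sketch below shows what a single scenario record in this style might look like, together with a minimal structural check. The field names (`scenario_id`, `mission`, `vehicle`, `environment`, `risk`) are illustrative assumptions based on the abstract's description, not the actual released UAVBench schema.

```python
import json

# Hypothetical scenario record in the spirit of UAVBench: mission objectives,
# vehicle configuration, environmental conditions, and quantitative risk labels.
# All field names and values are assumptions for illustration only.
scenario = {
    "scenario_id": "uavb-000001",
    "mission": {"objective": "powerline inspection", "max_duration_min": 25},
    "vehicle": {"type": "quadrotor", "battery_wh": 90.0, "payload_kg": 0.4},
    "environment": {"wind_mps": 6.5, "visibility_km": 8.0, "gps_quality": "degraded"},
    "risk": {"collision": 0.12, "battery_depletion": 0.31},
}

def validate_scenario(record: dict) -> bool:
    """Check that the required top-level sections are present and that
    numeric risk labels fall in [0, 1] (a machine-checkable sanity pass)."""
    required = {"scenario_id", "mission", "vehicle", "environment", "risk"}
    if not required.issubset(record):
        return False
    return all(
        isinstance(v, (int, float)) and 0.0 <= v <= 1.0
        for v in record["risk"].values()
    )

# Round-trip through JSON to confirm the record is serializable as claimed.
restored = json.loads(json.dumps(scenario))
print(validate_scenario(restored))  # → True
```

A schema-plus-validator pattern like this is what makes large generated datasets "machine-checkable": every record can be filtered automatically before any human or physics-based review.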