LLM-as-Judge for Semantic Judging of Powerline Segmentation in UAV Inspection

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of lightweight segmentation models to environmental variations in UAV-based power line inspection, which can yield unreliable outputs and pose safety risks. To enhance system robustness, the study introduces a large language model (LLM) as a semantic referee within an offline monitoring framework to evaluate the reliability of segmentation results. The proposed approach integrates image-overlay inputs, fixed prompt engineering, and controlled visual perturbations—such as fog, rain, and snow—to establish a dual evaluation protocol based on reproducibility and perceptual sensitivity. Experimental results demonstrate that the LLM exhibits high consistency under identical inputs, appropriately reduces confidence as visual quality degrades, and remains sensitive to missing or falsely detected power lines, thereby supporting safer and more reliable autonomous inspection systems.
📝 Abstract
The deployment of lightweight segmentation models on drones for autonomous power line inspection presents a critical challenge: maintaining reliable performance under real-world conditions that differ from training data. Although compact architectures such as U-Net enable real-time onboard inference, their segmentation outputs can degrade unpredictably in adverse environments, raising safety concerns. In this work, we study the feasibility of using a large language model (LLM) as a semantic judge to assess the reliability of power line segmentation results produced by drone-mounted models. Rather than introducing a new inspection system, we formalize a watchdog scenario in which an offboard LLM evaluates segmentation overlays, and we examine whether such a judge can be trusted to behave consistently and in a perceptually coherent manner. To this end, we design two evaluation protocols that analyze the judge's repeatability and sensitivity. First, we assess repeatability by repeatedly querying the LLM with identical inputs and fixed prompts, measuring the stability of its quality scores and confidence estimates. Second, we evaluate perceptual sensitivity by introducing controlled visual corruptions (fog, rain, snow, shadow, and sunflare) and analyzing how the judge's outputs respond to progressive degradation in segmentation quality. Our results show that the LLM produces highly consistent categorical judgments under identical conditions while exhibiting appropriate declines in confidence as visual reliability deteriorates. Moreover, the judge remains responsive to perceptual cues such as missing or misidentified power lines, even under challenging conditions. These findings suggest that, when carefully constrained, an LLM can serve as a reliable semantic judge for monitoring segmentation quality in safety-critical aerial inspection tasks.
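The dual evaluation protocol described in the abstract (repeatability under identical inputs, plus monotonic confidence decline under growing corruption) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `query_llm_judge` function, its argument names, and the response fields are hypothetical stand-ins for an actual LLM API call that would send a segmentation overlay and a fixed prompt, then parse a categorical label and a confidence score.

```python
# Hypothetical stand-in for the offboard LLM judge. A real deployment
# would submit the overlay image with a fixed prompt to an LLM API and
# parse its categorical quality label and confidence estimate.
def query_llm_judge(overlay_id: str, corruption_severity: float) -> dict:
    # Deterministic stub: confidence degrades as visual corruption grows.
    confidence = max(0.0, 1.0 - 0.8 * corruption_severity)
    label = "reliable" if confidence >= 0.5 else "unreliable"
    return {"label": label, "confidence": confidence}

def repeatability(overlay_id: str, n_trials: int = 5) -> bool:
    """Protocol 1: query the judge repeatedly with an identical input
    and check that labels agree and confidences barely vary."""
    results = [query_llm_judge(overlay_id, 0.0) for _ in range(n_trials)]
    labels = {r["label"] for r in results}
    confs = [r["confidence"] for r in results]
    return len(labels) == 1 and (max(confs) - min(confs)) < 0.05

def sensitivity(overlay_id: str,
                severities=(0.0, 0.25, 0.5, 0.75, 1.0)) -> bool:
    """Protocol 2: increase corruption severity (fog, rain, snow, shadow,
    sunflare in the paper) and check that the judge's confidence does
    not increase as quality degrades."""
    confs = [query_llm_judge(overlay_id, s)["confidence"] for s in severities]
    return all(a >= b for a, b in zip(confs, confs[1:]))

print(repeatability("overlay_001"))  # True
print(sensitivity("overlay_001"))    # True
```

With the deterministic stub both checks pass trivially; against a real stochastic LLM, the repeatability spread and the monotonicity check are exactly where a judge would be expected to fail if it is unreliable.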
Problem

Research questions and friction points this paper is trying to address.

power line segmentation
UAV inspection
semantic reliability
LLM-as-Judge
segmentation quality assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-as-Judge
semantic evaluation
power line segmentation
UAV inspection
perceptual sensitivity