Measuring LLM Trust Allocation Across Conflicting Software Artifacts

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models (LLMs) struggle to calibrate trust across software artifacts (code, documentation, and tests) when those artifacts disagree, a failure mode overlooked by existing evaluation methods. To systematically quantify LLMs' trust mechanisms under multi-source software conflicts, the authors propose TRACE, a framework combining blind perturbation generation, structured trust-trajectory collection, and multidimensional assessment covering quality judgment, inconsistency detection, affected-artifact attribution, and source prioritization. Experiments across seven models and 22,339 valid trust trajectories reveal that while models reliably identify explicit documentation errors (67–94% accuracy), detection drops substantially, by 7 to 42 percentage points, when only the implementation drifts and the documentation remains plausible. Models also consistently exhibit poor confidence calibration in these scenarios.
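The pipeline described above can be sketched in miniature. The code below is a hypothetical illustration, not the paper's actual implementation: artifact names follow the abstract (Javadoc, signature, implementation, test prefix), but `perturb`, `stub_model_trust`, and `run_trial` are invented stand-ins. One artifact per bundle is silently corrupted, a stub "model" assigns per-artifact trust scores, and attribution is correct when the least-trusted artifact is the perturbed one.

```python
import random

# Hypothetical sketch of a TRACE-style blind-perturbation trial.
# All function and variable names here are illustrative assumptions.
ARTIFACTS = ["javadoc", "signature", "implementation", "test_prefix"]

def perturb(bundle, target):
    """Return a copy of the bundle with one artifact corrupted."""
    corrupted = dict(bundle)
    corrupted[target] = corrupted[target] + "  // DRIFTED"
    return corrupted

def stub_model_trust(bundle):
    """Stand-in for an LLM: trusts an artifact less if it looks drifted."""
    return {name: (0.2 if "DRIFTED" in text else 0.9)
            for name, text in bundle.items()}

def run_trial(bundle, rng):
    target = rng.choice(ARTIFACTS)            # blind: the model never sees this
    scores = stub_model_trust(perturb(bundle, target))
    predicted = min(scores, key=scores.get)   # least-trusted artifact
    return predicted == target                # attribution correct?

rng = random.Random(0)
bundle = {name: f"<{name} text>" for name in ARTIFACTS}
accuracy = sum(run_trial(bundle, rng) for _ in range(100)) / 100
print(accuracy)
```

Because the stub trivially spots the corruption marker, attribution accuracy is perfect here; the paper's point is that real models fall well short of this when the drift is subtle and confined to the implementation.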
📝 Abstract
LLM-based software engineering assistants fail not only by producing incorrect outputs, but also by allocating trust to the wrong artifact when code, documentation, and tests disagree. Existing evaluations focus mainly on downstream outcomes and therefore cannot reveal whether a model recognized degraded evidence, identified the unreliable source, or calibrated its trust across artifacts. We present TRACE (Trust Reasoning over Artifacts for Calibrated Evaluation), a framework that elicits structured artifact-level trust traces over Javadoc, method signatures, implementations, and test prefixes under blind perturbations. Using 22,339 valid traces from seven models on 456 curated Java method bundles, we evaluate per-artifact quality assessment, inconsistency detection, affected artifact attribution, and source prioritization. Across all models, quality penalties are largely localized to the perturbed artifact and increase with severity, but sensitivity is asymmetric across artifact types: documentation bugs induce a substantially larger heavy-to-subtle gap than implementation faults (0.152-0.253 vs. 0.049-0.123). Models detect explicit documentation bugs well (67-94%) and Javadoc and implementation contradictions at 50-91%, yet show a systematic blind spot when only the implementation drifts while the documentation remains plausible, with detection dropping by 7-42 percentage points. Confidence is poorly calibrated for six of seven models. These findings suggest that current LLMs are better at auditing natural-language specifications than at detecting subtle code-level drift, motivating explicit artifact-level trust reasoning before correctness-critical downstream use.
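The abstract reports that confidence is poorly calibrated for six of seven models but does not name its metric. One common way to quantify miscalibration is expected calibration error (ECE): bin predictions by stated confidence and average the gap between each bin's accuracy and its mean confidence. The sketch below illustrates the idea on toy detection data; it is an assumption that anything like this metric was used in the paper.

```python
# Hedged sketch: expected calibration error (ECE) on toy detection outcomes.
def expected_calibration_error(confidences, correct, n_bins=5):
    """Mean |accuracy - confidence| per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(accuracy - avg_conf)
    return ece

# Overconfident toy model: confidence near 0.9, accuracy only 0.5.
confs = [0.9, 0.95, 0.9, 0.85, 0.9, 0.95]
hits  = [1,   0,    1,   0,    0,   1]
print(round(expected_calibration_error(confs, hits), 3))
```

A well-calibrated model would score near zero; the large gap here mirrors the overconfidence pattern the paper attributes to implementation-drift scenarios.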
Problem

Research questions and friction points this paper is trying to address.

trust allocation
conflicting software artifacts
LLM evaluation
artifact-level reasoning
calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

trust allocation
artifact-level evaluation
LLM reliability
software artifacts
calibrated reasoning