From Features to Actions: Explainability in Traditional and Agentic AI Systems

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limitations of existing explainable AI methods, which predominantly focus on static predictions and struggle to diagnose failures in agent behaviours driven by multi-step decision trajectories. The work proposes a trajectory-oriented explainability framework and presents the first systematic comparison between explanation mechanisms for static models and agent systems. Using feature attribution techniques (with ranking stability assessed via Spearman rank correlation) alongside trajectory-based scoring criteria, the authors empirically evaluate their approach on TAU-bench Airline and AssistantBench. Results demonstrate that while static attribution methods remain stable in classification tasks (ρ = 0.86), they fail to pinpoint agent failures. In contrast, trajectory-level analysis reveals inconsistent state tracking as a critical failure factor: it occurs 2.7 times more frequently in failed runs and correlates with a 49% drop in task success rates.
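The attribution-stability result above reduces to a rank-correlation computation over feature-attribution scores. A minimal sketch of Spearman rank correlation between two attribution runs, with invented toy scores (the helper names and data below are illustrative, not from the paper's code):

```python
# Hedged sketch: measuring stability of feature-attribution rankings across
# two runs via Spearman rank correlation. Toy data only; not the authors'
# implementation.

def _ranks(values):
    """Assign 1-based ranks, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of equal values so ties share an average rank.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy example: attribution scores for the same five features from two runs.
run_a = [0.40, 0.25, 0.20, 0.10, 0.05]
run_b = [0.38, 0.22, 0.24, 0.11, 0.05]
print(round(spearman_rho(run_a, run_b), 2))  # → 0.9
```

A value near 1 means the two runs rank the features almost identically, which is the sense in which the paper calls static attributions "stable".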

📝 Abstract
Over the last decade, explainable AI has primarily focused on interpreting individual model predictions, producing post-hoc explanations that relate inputs to outputs under a fixed decision structure. Recent advances in large language models (LLMs) have enabled agentic AI systems whose behaviour unfolds over multi-step trajectories. In these settings, success and failure are determined by sequences of decisions rather than a single output. While useful for static models, it remains unclear how explanation approaches designed for individual predictions translate to agentic settings where behaviour emerges over time. In this work, we bridge the gap between static and agentic explainability by empirically comparing attribution-based explanations used in static classification tasks with trace-based diagnostics used in agentic benchmarks (TAU-bench Airline and AssistantBench). Our results show that while attribution methods achieve stable feature rankings in static settings (Spearman $\rho = 0.86$), they cannot be applied reliably to diagnose execution-level failures in agentic trajectories. In contrast, trace-grounded rubric evaluation consistently localizes behaviour breakdowns and reveals that state-tracking inconsistency is 2.7$\times$ more prevalent in failed runs and reduces success probability by 49\%. These findings motivate a shift towards trajectory-level explainability when evaluating and diagnosing autonomous AI behaviour. Resources: https://github.com/VectorInstitute/unified-xai-evaluation-framework and https://vectorinstitute.github.io/unified-xai-evaluation-framework
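The trajectory-level statistics in the abstract (a failure factor 2.7× more prevalent in failed runs; a 49% drop in success probability when it appears) are simple aggregates over rubric-annotated traces. A minimal sketch, assuming each trace carries a success flag and a set of rubric failure labels (the data structure, label names, and numbers below are illustrative, not the paper's):

```python
# Hedged sketch: prevalence ratio and success-rate drop for a rubric failure
# factor across agent trajectories. All trace annotations are invented toy
# data; the label "state_tracking_inconsistency" is illustrative only.

def prevalence_ratio(traces, factor):
    """How much more common `factor` is among failed vs. successful traces."""
    failed = [t for t in traces if not t["success"]]
    passed = [t for t in traces if t["success"]]
    p_failed = sum(factor in t["factors"] for t in failed) / len(failed)
    p_passed = sum(factor in t["factors"] for t in passed) / len(passed)
    return p_failed / p_passed

def success_rate_drop(traces, factor):
    """Relative drop in success rate when `factor` is present in a trace."""
    with_f = [t for t in traces if factor in t["factors"]]
    without = [t for t in traces if factor not in t["factors"]]
    r_with = sum(t["success"] for t in with_f) / len(with_f)
    r_without = sum(t["success"] for t in without) / len(without)
    return 1.0 - r_with / r_without

traces = [
    {"success": False, "factors": {"state_tracking_inconsistency"}},
    {"success": False, "factors": {"state_tracking_inconsistency", "tool_misuse"}},
    {"success": False, "factors": {"tool_misuse"}},
    {"success": True,  "factors": {"state_tracking_inconsistency"}},
    {"success": True,  "factors": set()},
    {"success": True,  "factors": set()},
]
print(prevalence_ratio(traces, "state_tracking_inconsistency"))   # → 2.0
print(success_rate_drop(traces, "state_tracking_inconsistency"))  # → 0.5
```

On this toy data the factor appears in 2/3 of failed runs versus 1/3 of successful ones (ratio 2.0), and traces containing it succeed half as often as the rest (a 50% relative drop), mirroring the form of the paper's 2.7× and 49% figures.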
Problem

Research questions and friction points this paper is trying to address.

Explainable AI
Agentic AI
Trajectory-level Explainability
Attribution Methods
Execution-level Failures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Agentic Systems
Trace-based Diagnostics
Attribution Methods
Trajectory-level Explainability
Sindhuja Chaduvula
Vector Institute for Artificial Intelligence, Toronto, Canada
Jessee Ho
Vector Institute for Artificial Intelligence, Toronto, Canada
Kina Kim
Independent Researcher
Aravind Narayanan
Associate Applied ML Specialist, Vector Institute; Master's Student, University of Toronto
LLMs, Computer Vision, VLMs
Mahshid Alinoori
Vector Institute for Artificial Intelligence, Toronto, Canada
Muskan Garg
Mayo Clinic, Rochester, MN, USA
Dhanesh Ramachandram
Vector Institute
Deep Learning, Computer Vision, Machine Learning, Pattern Recognition
Shaina Raza
Vector Institute for Artificial Intelligence, Toronto, Canada