AI Summary
This work addresses a critical limitation in existing agent-based AutoML systems, which rely solely on final performance metrics and lack structured evaluation of intermediate decision processes, thereby hindering root-cause diagnosis of failures. To overcome this, the authors propose an Evaluation Agent (EA), a non-intrusive, observer-based framework that introduces, for the first time, a centralized assessment of AutoML decisions without interfering with system execution. The EA leverages a large language model-driven architecture to enable interpretable and traceable auditing across four dimensions: decision validity, reasoning consistency, model quality risk, and counterfactual impact. Through counterfactual analysis and multi-dimensional decision-quality evaluation, the EA achieves an F1 score of 0.919 in detecting erroneous decisions across four experiments, identifies reasoning inconsistencies uncorrelated with final performance, and quantifies the individual impact of decisions on downstream outcomes, ranging from -4.9% to +8.3%.
Abstract
Agent-based AutoML systems rely on large language models to make complex, multi-stage decisions across data processing, model selection, and evaluation. However, existing evaluation practices remain outcome-centric, focusing primarily on final task performance. Through a review of prior work, we find that none of the surveyed agentic AutoML systems report structured, decision-level evaluation metrics intended for post-hoc assessment of intermediate decision quality. To address this limitation, we propose an Evaluation Agent (EA) that performs decision-centric assessment of AutoML agents without interfering with their execution. The EA is designed as an observer that evaluates intermediate decisions along four dimensions: decision validity, reasoning consistency, model quality risks beyond accuracy, and counterfactual decision impact. Across four proof-of-concept experiments, we demonstrate that the EA can (i) detect faulty decisions with an F1 score of 0.919, (ii) identify reasoning inconsistencies independent of final outcomes, and (iii) attribute downstream performance changes to agent decisions, revealing impacts ranging from -4.9% to +8.3% in final metrics. These results illustrate how decision-centric evaluation exposes failure modes that are invisible to outcome-only metrics. Our work reframes the evaluation of agentic AutoML systems from an outcome-based perspective to one that audits agent decisions, offering a foundation for reliable, interpretable, and governable autonomous ML systems.
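To make the two headline quantities concrete, here is a minimal illustrative sketch. It assumes that counterfactual decision impact is measured as the percentage-point change in the final metric when a single agent decision is swapped for an alternative, and that faulty-decision detection is scored with the standard F1 definition; the function names and the exact formulation are assumptions for illustration, not the paper's implementation.

```python
def counterfactual_impact(metric_with_decision: float,
                          metric_counterfactual: float) -> float:
    """Percentage-point change in the final metric attributable to one
    decision: rerun the pipeline with the decision replaced by an
    alternative and compare final metrics (illustrative definition)."""
    return (metric_with_decision - metric_counterfactual) * 100


def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1 over faulty-decision detections: tp = faulty decisions
    correctly flagged, fp = sound decisions wrongly flagged, fn = faulty
    decisions missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Hypothetical numbers: a decision that lowers final accuracy from
# 0.861 to 0.812 would register an impact of about -4.9 points.
print(round(counterfactual_impact(0.812, 0.861), 1))
```

Under this reading, the reported range of -4.9% to +8.3% means individual decisions were found to shift the final metric by up to roughly five points downward or eight points upward.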