Feature Engineering for Agents: An Adaptive Cognitive Architecture for Interpretable ML Monitoring

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the verbose, low-interpretability outputs of production ML model monitoring that hinder effective decision-making, this paper proposes an LLM-based adaptive cognitive architecture. The architecture transfers the refactor–break down–compile paradigm from feature engineering to monitoring-agent decision-making and replaces the LLM's intrinsic stochastic planning with deterministic planning, substantially improving logical consistency and auditability. Its core components are semantic refactoring, hierarchical decomposition analysis, and multi-granularity compilation and integration, underpinned by a lightweight, interpretability-oriented planning framework. Experiments across diverse domains show that the method significantly outperforms baseline approaches in monitoring accuracy while producing outputs with high semantic clarity, strong operational guidance, and robustness across models and tasks.

📝 Abstract
Monitoring Machine Learning (ML) models in production environments is crucial, yet traditional approaches often yield verbose, low-interpretability outputs that hinder effective decision-making. We propose a cognitive architecture for ML monitoring that applies feature engineering principles to agents based on Large Language Models (LLMs), significantly enhancing the interpretability of monitoring outputs. Central to our approach is a Decision Procedure module that simulates feature engineering through three key steps: Refactor, Break Down, and Compile. The Refactor step improves data representation to better capture feature semantics, allowing the LLM to focus on salient aspects of the monitoring data while reducing noise and irrelevant information. Break Down decomposes complex information for detailed analysis, and Compile integrates sub-insights into clear, interpretable outputs. This process leads to a more deterministic planning approach, reducing dependence on LLM-generated planning, which can sometimes be inconsistent and overly general. The combination of feature engineering-driven planning and selective LLM utilization results in a robust decision support system, capable of providing highly interpretable and actionable insights. Experiments using multiple LLMs demonstrate the efficacy of our approach, achieving significantly higher accuracy compared to various baselines across several domains.
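The Refactor → Break Down → Compile procedure described in the abstract can be pictured as a simple pipeline. The sketch below is illustrative only, not the paper's implementation: all function names are hypothetical, and a fixed-threshold rule stands in for the per-feature LLM analysis call.

```python
def refactor(raw):
    # Refactor: compress raw monitoring records into a few named,
    # semantically meaningful features, dropping noisy fields.
    return {
        "accuracy_drop": round(raw["train_acc"] - raw["prod_acc"], 3),
        "feature_drift": round(abs(raw["train_mean"] - raw["prod_mean"]), 3),
    }

def break_down(features):
    # Break Down: split the refactored view into one sub-task per feature.
    return list(features.items())

def analyze(name, value, threshold=0.1):
    # Placeholder for the per-feature LLM call; here a simple rule.
    status = "degraded" if value > threshold else "stable"
    return f"{name}: {status} ({value})"

def compile_report(insights):
    # Compile: merge the sub-insights into one interpretable summary.
    return "Monitoring report:\n" + "\n".join(f"- {i}" for i in insights)

raw = {"train_acc": 0.92, "prod_acc": 0.78, "train_mean": 5.0, "prod_mean": 5.04}
report = compile_report(analyze(n, v) for n, v in break_down(refactor(raw)))
print(report)
```

The point of the Refactor stage in this sketch is that the downstream analysis only ever sees two named signals rather than the full raw record, which is what keeps the final report short and interpretable.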
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability of ML monitoring outputs
Reduce noise and irrelevant information in monitoring data
Improve decision-making with a deterministic planning approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature engineering principles applied to LLM agents
Decision Procedure module with Refactor, Break Down, Compile
Deterministic planning reduces inconsistency in LLM-generated plans
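The deterministic-planning idea can be illustrated with a minimal sketch (hypothetical, not the authors' code; `PLAN`, `STEPS`, and `run_plan` are invented names): the plan is a fixed, hard-coded step sequence, so every run follows the same auditable order, and an LLM would only be consulted inside individual steps rather than for planning itself.

```python
# The plan is fixed in code, not generated by the LLM at run time.
PLAN = ["refactor", "break_down", "compile"]

STEPS = {
    "refactor": lambda d: {"accuracy_drop": d["train_acc"] - d["prod_acc"]},
    "break_down": lambda f: sorted(f.items()),
    "compile": lambda pairs: "; ".join(f"{k}={v:.2f}" for k, v in pairs),
}

def run_plan(record):
    state, trace = record, []
    for name in PLAN:           # the step order never varies between runs
        state = STEPS[name](state)
        trace.append(name)      # identical audit trail on every run
    return state, trace

result, trace = run_plan({"train_acc": 0.92, "prod_acc": 0.81})
```

Because the plan is data-independent, auditing reduces to inspecting the per-step outputs: the trace is identical across runs, which is the consistency benefit the Innovation list attributes to deterministic planning.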
Gusseppe Bravo-Rocca
Barcelona Supercomputing Center, Barcelona, Spain
Peini Liu
Barcelona Supercomputing Center, Barcelona, Spain
Jordi Guitart
Universitat Politècnica de Catalunya (UPC); Barcelona Supercomputing Center (BSC)
Rodrigo M Carrillo-Larco
Emory University, Atlanta, GA, USA
Ajay Dholakia
Lenovo Infrastructure Solutions Group, Morrisville, NC, USA
David Ellison
Lenovo Infrastructure Solutions Group, Morrisville, NC, USA