Learning Unknown Interdependencies for Decentralized Root Cause Analysis in Nonlinear Dynamical Systems

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of root cause analysis in decentralized industrial networks where system dependency graphs are unavailable, and IoT time-series data are nonlinear, high-dimensional, and heterogeneous. The authors propose a federated cross-client interdependency learning method that models unknown dynamic dependencies among clients through lightweight auxiliary modules, without accessing raw data or modifying local private models. A global server coordinates these modules to maintain representation consistency across clients. This approach relaxes conventional federated learning assumptions of homogeneous feature spaces and retrainable models, enabling privacy-preserving root cause analysis in heterogeneous settings with feature-partitioned data and fixed local models. Theoretical analysis establishes algorithmic convergence, and extensive experiments on large-scale synthetic and real-world industrial cybersecurity datasets demonstrate its effectiveness.

📝 Abstract
Root cause analysis (RCA) in networked industrial systems, such as supply chains and power networks, is notoriously difficult due to unknown and dynamically evolving interdependencies among geographically distributed clients. These clients represent heterogeneous physical processes and industrial assets equipped with sensors that generate large volumes of nonlinear, high-dimensional, and heterogeneous IoT data. Classical RCA methods require partial or full knowledge of the system's dependency graph, which is rarely available in these complex networks. While federated learning (FL) offers a natural framework for decentralized settings, most existing FL methods assume homogeneous feature spaces and retrainable client models. Neither assumption holds in our problem setting: different clients observe different data features and often run fixed, proprietary models that cannot be modified. This paper presents a federated cross-client interdependency learning methodology for feature-partitioned, nonlinear time-series data that requires neither access to raw sensor streams nor modification of proprietary client models. Each proprietary local client model is augmented with a machine learning (ML) model that encodes cross-client interdependencies. These ML models are coordinated via a global server that enforces representation consistency while preserving privacy through calibrated differential privacy noise. RCA is performed using model residuals and anomaly flags. We establish theoretical convergence guarantees and validate our approach through extensive simulations and on a real-world industrial cybersecurity dataset.
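The architecture described in the abstract can be illustrated with a toy sketch: each client keeps a frozen local model, trains only a lightweight auxiliary module on representations shared by other clients, a server pulls the modules toward a noisy consensus, and the root cause is flagged as the client with the largest residual. This is a minimal illustration under strong simplifying assumptions (linear auxiliary modules, a coupling shared by all clients, Gaussian noise as a crude stand-in for calibrated differential privacy); all names here are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, DIM = 3, 4
# Ground-truth cross-client coupling, unknown to the clients.
# Simplifying assumption: the same coupling acts on every client.
TRUE_C = rng.normal(scale=0.3, size=(DIM, DIM * N_CLIENTS))

class Client:
    """Frozen proprietary model (frozen_W) plus a trainable auxiliary
    module (aux_W) that encodes cross-client interdependencies."""
    def __init__(self):
        self.frozen_W = rng.normal(scale=0.5, size=(DIM, DIM))  # fixed, never retrained
        self.aux_W = np.zeros((DIM, DIM * N_CLIENTS))           # lightweight add-on

    def rep(self, x):
        # Only this model-derived representation is shared, never raw data.
        return np.tanh(self.frozen_W @ x)

    def predict(self, x, reps):
        return self.rep(x) + self.aux_W @ np.concatenate(reps)

    def local_step(self, x, y, reps, lr=0.05):
        z = np.concatenate(reps)
        resid = self.predict(x, reps) - y
        self.aux_W -= lr * np.outer(resid, z)  # gradient step on the aux module only
        return resid

def server_round(clients, sigma=1e-3):
    """Pull auxiliary modules toward a noisy average: a crude stand-in for
    representation consistency with differential-privacy noise."""
    mean_W = np.mean([c.aux_W for c in clients], axis=0)
    noisy = mean_W + rng.normal(scale=sigma, size=mean_W.shape)
    for c in clients:
        c.aux_W = 0.5 * c.aux_W + 0.5 * noisy

def sample_batch(clients):
    xs = [rng.normal(size=DIM) for _ in clients]
    reps = [c.rep(x) for c, x in zip(clients, xs)]
    ys = [c.rep(x) + TRUE_C @ np.concatenate(reps) for c, x in zip(clients, xs)]
    return xs, reps, ys

clients = [Client() for _ in range(N_CLIENTS)]
xs, reps, ys = sample_batch(clients)
init_resid = np.mean([np.linalg.norm(c.predict(x, reps) - y)
                      for c, x, y in zip(clients, xs, ys)])

for step in range(500):
    xs, reps, ys = sample_batch(clients)
    for c, x, y in zip(clients, xs, ys):
        c.local_step(x, y, reps)
    if step % 10 == 9:
        server_round(clients)

# Residual-based RCA: inject a fault at client 0, flag the largest residual.
xs, reps, ys = sample_batch(clients)
clean_resid = np.mean([np.linalg.norm(c.predict(x, reps) - y)
                       for c, x, y in zip(clients, xs, ys)])
ys[0] = ys[0] + 1.0  # simulated process fault at client 0
resid_norms = [np.linalg.norm(c.predict(x, reps) - y)
               for c, x, y in zip(clients, xs, ys)]
root_cause = int(np.argmax(resid_norms))
```

In this sketch the auxiliary modules converge because the regression target is linear in the shared representations; the paper's actual method handles nonlinear dynamics, feature-partitioned data, and provides formal convergence guarantees, none of which this toy captures.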
Problem

Research questions and friction points this paper is trying to address.

root cause analysis
nonlinear dynamical systems
unknown interdependencies
decentralized systems
federated learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

federated learning
root cause analysis
cross-client interdependency
differential privacy
nonlinear dynamical systems