🤖 AI Summary
Existing tabular-log debugging approaches over-rely on large language models (LLMs) and therefore suffer from limited flexibility and poor scalability. Method: This paper proposes a lightweight dynamic graph modeling framework. Its core innovation is the first formulation of static tabular logs as an object-event heterogeneous dynamic graph, in which temporal edges and heterogeneous node representations enable interpretable reconstruction of system states; a lightweight dynamic graph neural network (GNN) is further designed to avoid high computational overhead. Contribution/Results: Evaluated on real-system and academic log datasets, the method achieves higher inconsistency detection accuracy than state-of-the-art LLMs while reducing inference cost by two orders of magnitude. It significantly improves debugging efficiency, flexibility, and scalability without sacrificing interpretability.
📝 Abstract
A tabular log abstracts the objects and events of a real-world system and records their updates to reflect changes in that system, so real-world inconsistencies can be detected efficiently by debugging the corresponding log entries. However, recent advances in processing text-enriched tabular log data overly depend on large language models (LLMs) and other heavyweight models, and thus suffer from limited flexibility and scalability. This paper proposes a new framework, GraphLogDebugger, to debug tabular logs based on dynamic graphs. By constructing heterogeneous nodes for objects and events and connecting node-wise edges, the framework recovers the system behind the tabular log as an evolving dynamic graph. With the help of this dynamic graph modeling, a simple dynamic Graph Neural Network (GNN) is expressive enough to outperform LLMs in debugging tabular logs, as validated by experimental results on real-world log datasets from computer systems and academic papers.
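The abstract's graph construction can be sketched in a few lines: one node per object, one node per event occurrence, and timestamped edges linking each event to the object it updates. This is a minimal illustration only; the column names (`ts`, `object_id`, `event_type`, `attrs`) and the exact node/edge layout are assumptions, not the paper's actual schema.

```python
from collections import namedtuple

Edge = namedtuple("Edge", ["event_idx", "object_id", "ts"])

# Hypothetical tabular log: each row records an event updating one object.
log_rows = [
    {"ts": 0, "object_id": "job42", "event_type": "submit", "attrs": {"user": "alice"}},
    {"ts": 1, "object_id": "job42", "event_type": "start",  "attrs": {"node": "gpu-3"}},
    {"ts": 2, "object_id": "job42", "event_type": "finish", "attrs": {"status": "ok"}},
]

def build_dynamic_graph(rows):
    """Turn tabular log rows into an object-event heterogeneous dynamic graph.

    Returns two heterogeneous node sets (objects and event occurrences)
    plus temporal edges; replaying the edges in timestamp order
    reconstructs the evolving system state a dynamic GNN would consume.
    """
    object_nodes = set()          # one node per distinct object
    event_nodes = []              # one node per event occurrence
    edges = []                    # temporal event -> object edges
    for row in sorted(rows, key=lambda r: r["ts"]):
        object_nodes.add(row["object_id"])
        event_idx = len(event_nodes)
        event_nodes.append((row["event_type"], row["attrs"]))
        edges.append(Edge(event_idx, row["object_id"], row["ts"]))
    return object_nodes, event_nodes, edges

objects, events, edges = build_dynamic_graph(log_rows)
```

An inconsistency detector could then flag, for example, a `finish` edge for an object with no preceding `start` edge when replaying edges in timestamp order.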