🤖 AI Summary
Causal discovery is critical for studying complex systems such as biological networks, yet existing methods (e.g., PC, NOTEARS) suffer from restrictive linearity assumptions, weak identification of causal direction, sensitivity to violations of faithfulness, and inefficient search. While large language models (LLMs) exhibit strong reasoning capabilities, they are designed for text and therefore ill-suited to tabular causal data. This paper introduces CALM, an LLM-adaptation framework designed for tabular data that integrates local causal scoring, conditional independence testing, and relational attribute modeling to enable fine-grained identification of nonlinear causal mechanisms. Built on a lightweight Mamba-based architecture and trained jointly on synthetic and real-world biological datasets, CALM achieves over 91% accuracy on synthetic benchmarks and identifies key causal drivers of hepatitis C virus progression, significantly outperforming state-of-the-art baselines.
📝 Abstract
Causal discovery from observational data is fundamental to scientific fields like biology, where controlled experiments are often impractical. However, existing methods, including constraint-based (e.g., PC, causalMGM) and score-based approaches (e.g., NOTEARS), face significant limitations: an inability to resolve causal direction, restriction to linear associations, sensitivity to violations of the faithfulness assumption, and inefficiency in searching vast hypothesis spaces. While large language models (LLMs) offer powerful reasoning capabilities, their application is hindered by a fundamental discrepancy: they are designed for text, while most causal data is tabular. To address these challenges, we introduce CALM, a novel causal analysis language model specifically designed for tabular data in complex systems. CALM leverages a Mamba-based architecture to classify causal patterns from pairwise variable relationships. It integrates a comprehensive suite of evidence, including local causal scores, conditional independence tests, and relational attributes, to capture a wide spectrum of linear, nonlinear, and conditional causal mechanisms. Trained on a diverse corpus of synthetic data (from linear, mixed, and nonlinear models) and 10 real-world biological datasets with rigorously validated causal relationships, the model is designed for robustness and generalizability. Empirical evaluation demonstrates that CALM significantly outperforms existing methods, achieving over 91% accuracy in simulation studies and identifying causal factors of hepatitis C virus progression in a real-world application. This work represents a significant step toward accurate and generalizable causal discovery by adapting the pattern-recognition capabilities of language models to the intricacies of tabular data.
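To make the "suite of evidence" concrete, the sketch below builds a toy pairwise feature extractor of the kind the abstract describes: marginal correlation scores plus a conditional independence test. Everything here is an illustrative stand-in, not the paper's implementation — the feature names are hypothetical, the Fisher-z test is one common CI test choice, and no Mamba classifier is involved; the point is only what a pairwise evidence vector for two variables can look like.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Rank correlation: a simple nonlinear (monotone) association score."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    return pearson(ranks(x), ranks(y))

def partial_corr(x, y, z):
    """First-order partial correlation of x and y given a single variable z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def fisher_z_pvalue(r, n, k=0):
    """Two-sided p-value for a (partial) correlation via the Fisher z-transform.

    k is the size of the conditioning set (0 for a marginal test).
    """
    r = max(min(r, 0.999999), -0.999999)
    stat = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - k - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(stat) / math.sqrt(2))))

def evidence_vector(x, y, z):
    """Hypothetical pairwise evidence features for variables x and y given z."""
    n = len(x)
    r = pearson(x, y)
    return {
        "pearson": r,                                      # linear score
        "spearman": spearman(x, y),                        # monotone score
        "p_marginal": fisher_z_pvalue(r, n),               # x ⫫ y ?
        "p_given_z": fisher_z_pvalue(partial_corr(x, y, z), n, k=1),  # x ⫫ y | z ?
    }

# Toy demo: z confounds x and y, so x and y are marginally dependent
# but conditionally independent given z.
random.seed(0)
n = 500
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]
ev = evidence_vector(x, y, z)
```

In this confounded example the marginal p-value is essentially zero while the conditional p-value is not, which is exactly the kind of contrast a downstream classifier can exploit to separate direct links from spurious ones.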