CausalGraph2LLM: Evaluating LLMs for Causal Queries

📅 2024-10-21
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit sensitivity to causal graph encoding formats and lack well-defined performance boundaries in causal reasoning. Method: We introduce CausalGraph2LLM, the first large-scale causal graph benchmark comprising over 700,000 dual-granularity queries, and systematically evaluate LLMs on both graph-level and node-level causal tasks. We propose a DAG-based dual-granularity query classification framework and conduct extensive experiments across open- and closed-weight models under diverse graph encoding strategies. Contribution/Results: Our study quantifies, for the first time, substantial encoding sensitivity in mainstream LLMs (e.g., GPT-4, Gemini-1.5), with performance deviations of up to 60%. We further identify significant parametric memory biases in interventional and contextual causal inference. These findings establish a reproducible evaluation paradigm for trustworthy causal AI and provide critical diagnostics of key failure modes.

๐Ÿ“ Abstract
Causality is essential in scientific research, enabling researchers to interpret true relationships between variables. These causal relationships are often represented by causal graphs, which are directed acyclic graphs. With the recent advancements in Large Language Models (LLMs), there is increasing interest in exploring their capabilities in causal reasoning and their potential use to hypothesize causal graphs. These tasks require LLMs to encode the causal graph effectively for subsequent downstream tasks. In this paper, we introduce CausalGraph2LLM, a comprehensive benchmark comprising over 700k queries across diverse causal graph settings to evaluate the causal reasoning capabilities of LLMs. We categorize the causal queries into two types: graph-level and node-level queries. We benchmark both open-sourced and proprietary models for our study. Our findings reveal that while LLMs show promise in this domain, they are highly sensitive to the encoding used. Even capable models like GPT-4 and Gemini-1.5 exhibit sensitivity to encoding, with deviations of about 60%. We further demonstrate this sensitivity for downstream causal intervention tasks. Moreover, we observe that LLMs often display biases when presented with contextual information about a causal graph, potentially stemming from their parametric memory.
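To make the abstract's setup concrete, here is a minimal sketch of what "encoding a causal graph for an LLM" and a "node-level query" might look like. The edge list, the two encoding functions, and the query helper are all illustrative assumptions, not the paper's exact formats.

```python
# Hypothetical causal DAG as a directed edge list (illustrative variables).
edges = [("Smoking", "Cancer"), ("Pollution", "Cancer"), ("Cancer", "Fatigue")]

def encode_as_edge_list(edges):
    """Terse structural encoding: one 'A -> B' edge per line."""
    return "\n".join(f"{a} -> {b}" for a, b in edges)

def encode_as_sentences(edges):
    """Natural-language encoding: one causal sentence per edge."""
    return " ".join(f"{a} causes {b}." for a, b in edges)

def node_level_query(node, edges):
    """Example node-level query: ask for the direct causes (parents) of a node,
    and compute the ground-truth answer from the edge list."""
    parents = sorted(a for a, b in edges if b == node)
    question = f"Which variables directly cause {node}?"
    return question, parents
```

A benchmark in this spirit would prompt the same model with each encoding plus the same question, then compare accuracy against the ground truth to measure encoding sensitivity.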
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs for causal reasoning
Assessing encoding sensitivity in causal graphs
Identifying biases in LLMs' causal interpretations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for causal reasoning
700k diverse causal queries
Sensitivity to graph encoding
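The encoding-sensitivity finding above can be summarized with a simple metric: the spread in task accuracy across encodings of the same graph. This is a hedged sketch; the metric, function name, and the accuracy numbers are illustrative assumptions, not values from the paper.

```python
def encoding_sensitivity(accuracies):
    """Maximum deviation in accuracy across graph encodings (hypothetical metric).

    accuracies: dict mapping encoding name -> accuracy on the same query set.
    """
    vals = list(accuracies.values())
    return max(vals) - min(vals)

# Illustrative numbers only: a ~27-point gap between the best and worst encoding.
acc = {"edge_list": 0.82, "sentences": 0.55, "json": 0.70}
gap = encoding_sensitivity(acc)
```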