CARE-ECG: Causal Agent-based Reasoning for Explainable and Counterfactual ECG Interpretation

📅 2026-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing ECG-LLM systems suffer from weak signal-text alignment and limited temporal reasoning due to the absence of explicit physiological and causal structure, which hinders clinical interpretability and counterfactual analysis. This work proposes the first ECG-language reasoning framework that integrates structural causal models with a modular agent architecture. By extracting latent biomarkers through multi-lead temporal encoding and combining causal graph inference with causally informed retrieval-augmented generation, the framework produces traceable and verifiable diagnostic explanations. Evaluated on the Expert-ECG-QA and SCP-mapped PTB-XL benchmarks, the method achieves accuracies of 0.84 and 0.76, respectively, significantly improving diagnostic accuracy and explanation fidelity while effectively suppressing hallucinations.
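The three stages the summary describes (latent biomarker encoding, causal graph inference, retrieval-grounded explanation) can be sketched as follows. This is an illustrative toy, not the authors' code: every identifier (encode_biomarkers, causal_inference, the rule and knowledge tables) is hypothetical, and the "biomarkers" here are just per-lead mean amplitudes standing in for the paper's learned temporal features.

```python
def encode_biomarkers(ecg_leads):
    # Stage 1: compress each lead into a crude latent feature (mean amplitude).
    return {f"lead{i}": sum(lead) / len(lead) for i, lead in enumerate(ecg_leads)}

def causal_inference(biomarkers, rules):
    # Stage 2: score each candidate diagnosis via its causal rule and keep the
    # best, along with the evidence path that produced the score.
    scored = [(rule["score"](biomarkers), name, rule["path"])
              for name, rule in rules.items()]
    score, label, path = max(scored)
    return {"label": label, "score": score, "path": path}

def explain(diagnosis, knowledge):
    # Stage 3: retrieval-augmented explanation stub, keyed on the causal
    # evidence path so every claim traces back to explicit evidence.
    snippets = [knowledge[node] for node in diagnosis["path"]]
    return f"{diagnosis['label']}: " + " -> ".join(snippets)

# Toy run with two fake leads and two candidate diagnoses.
leads = [[0.1, 0.2, 0.3], [0.6, 0.7, 0.8]]
rules = {
    "tachycardia": {"score": lambda b: b["lead1"], "path": ["lead1", "rate"]},
    "normal": {"score": lambda b: 1 - b["lead1"], "path": ["lead1"]},
}
knowledge = {"lead1": "elevated mean amplitude in lead II",
             "rate": "high heart rate inferred from RR intervals"}
dx = causal_inference(encode_biomarkers(leads), rules)
print(explain(dx, knowledge))
```

The key design point the summary emphasizes is that the explanation in stage 3 is assembled only from the causal path chosen in stage 2, which is what makes the output traceable rather than free-form generation.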

📝 Abstract
Large language models (LLMs) enable waveform-to-text ECG interpretation and interactive clinical questioning, yet most ECG-LLM systems still rely on weak signal-text alignment and retrieval without explicit physiological or causal structure. This limits grounding, temporal reasoning, and counterfactual "what-if" analysis central to clinical decision-making. We propose CARE-ECG, a causally structured ECG-language reasoning framework that unifies representation learning, diagnosis, and explanation in a single pipeline. CARE-ECG encodes multi-lead ECGs into temporally organized latent biomarkers, performs causal graph inference for probabilistic diagnosis, and supports counterfactual assessment via structural causal models. To improve faithfulness, CARE-ECG grounds language outputs through causal retrieval-augmented generation and a modular agentic pipeline that integrates history, diagnosis, and response with verification. Across multiple ECG benchmarks and expert QA settings, CARE-ECG improves diagnostic accuracy and explanation faithfulness while reducing hallucinations (e.g., 0.84 accuracy on Expert-ECG-QA and 0.76 on SCP-mapped PTB-XL under GPT-4). Overall, CARE-ECG provides traceable reasoning by exposing key latent drivers, causal evidence paths, and how alternative physiological states would change outcomes.
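The counterfactual "what-if" capability the abstract attributes to structural causal models follows Pearl's standard abduction-action-prediction recipe. A minimal sketch of that recipe on a hypothetical two-variable ECG mechanism (the variable names and mechanisms below are invented for illustration, not taken from the paper):

```python
def mechanisms(ischemia, u_artifact):
    # Structural equations: ST depression appears if ischemia is present OR a
    # recording artifact occurred; the diagnosis reads off ST depression.
    st_depression = ischemia or u_artifact
    diagnosis = "ischemic pattern" if st_depression else "normal"
    return st_depression, diagnosis

# Factual world: ischemia present, no artifact; the model reads "ischemic pattern".
ischemia_obs, u_artifact = True, False
st_obs, dx_factual = mechanisms(ischemia_obs, u_artifact)

# Abduction: hold the exogenous term (u_artifact) at its inferred value.
# Action: intervene do(ischemia = False).
# Prediction: rerun the same structural equations under the intervention.
_, dx_counterfactual = mechanisms(False, u_artifact)

print(dx_factual, "->", dx_counterfactual)  # ischemic pattern -> normal
```

This is the sense in which the framework can report "how alternative physiological states would change outcomes": the exogenous context is kept fixed while one physiological variable is surgically altered and the mechanisms are re-evaluated.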
Problem

Research questions and friction points this paper is trying to address.

ECG interpretation
causal reasoning
counterfactual analysis
explainable AI
clinical decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal reasoning
counterfactual analysis
ECG interpretation
retrieval-augmented generation
structural causal model