🤖 AI Summary
Randomized controlled trials (RCTs), the gold standard for causal inference, often suffer from limited statistical power due to high costs and small sample sizes; moreover, rich unstructured auxiliary data (e.g., text, speech) collected in RCTs—though highly predictive—are underutilized for causal estimation. To address this, we propose CALM, the first framework to safely integrate large language models (LLMs) into RCT-based causal analysis. CALM incorporates LLM-predicted auxiliary variables through heterogeneity-aware calibration and residualization-based reweighting, preserving unbiasedness under standard assumptions. It further introduces a U-statistic–inspired few-shot ensemble and bias-correction mechanism to mitigate prompt variability and improve the consistency and stability of estimation. In simulations of a depression intervention trial, CALM substantially reduces estimator variance, outperforming baseline methods in both zero-shot and few-shot settings, while demonstrating strong robustness across diverse prompts.
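The U-statistic–inspired few-shot ensemble mentioned above can be illustrated with a minimal sketch: average predictions over many randomly drawn demonstration sets, so no single prompt choice dominates the output. This is not the paper's estimator; `ensemble_predict` and the stand-in `toy_predict` are hypothetical names, and a real implementation would call an LLM where `predict_fn` appears.

```python
import random

def ensemble_predict(x, pool, predict_fn, k=3, n_draws=20, seed=0):
    """U-statistic-style few-shot ensemble: average predict_fn(x, demos)
    over randomly sampled size-k demonstration sets drawn from `pool`.
    Averaging over random draws dampens prompt-selection variability."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_draws):
        demos = rng.sample(pool, k)         # one random demonstration set
        preds.append(predict_fn(x, demos))  # one few-shot prediction
    return sum(preds) / len(preds)          # ensemble average

# Hypothetical stand-in for an LLM call: the prediction is x shifted by
# the mean of the sampled demonstrations, so each demo set perturbs it.
def toy_predict(x, demos):
    return x + sum(demos) / len(demos)

pool = [0.5, -0.2, 0.1, 0.4, -0.3, 0.0]
print(ensemble_predict(2.0, pool, toy_predict))
```

With a fixed seed the ensemble is deterministic, and its output is pinned between the smallest and largest possible demonstration-set means, which is the stabilizing effect the ensemble is after.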
📝 Abstract
Randomized experiments, or randomized controlled trials (RCTs), are the gold standard for causal inference, yet cost and sample-size constraints limit power. Meanwhile, modern RCTs routinely collect rich, unstructured data that are highly prognostic of outcomes but rarely used in causal analyses. We introduce CALM (Causal Analysis leveraging Language Models), a statistical framework that integrates large language model (LLM) predictions with established causal estimators to increase precision while preserving statistical validity. CALM treats LLM outputs as auxiliary prognostic information and corrects their potential bias via a heterogeneous calibration step that residualizes and optimally reweights predictions. We prove that CALM remains consistent even when LLM predictions are biased and achieves efficiency gains over augmented inverse probability weighting (AIPW) estimators for a variety of causal effects. In particular, CALM includes a few-shot variant that aggregates predictions across randomly sampled demonstration sets; the resulting U-statistic-like predictor restores an i.i.d. structure and mitigates prompt-selection variability. Empirically, in simulations calibrated to a mobile-app depression RCT, CALM delivers lower variance than benchmark methods, is effective in both zero- and few-shot settings, and remains stable across prompt designs. Through principled use of LLMs to harness unstructured data and external knowledge learned during pretraining, CALM provides a practical path to more precise causal analyses in RCTs.
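The residualize-and-reweight idea in the abstract can be sketched with a simple control-variate–style adjustment: under randomization, a prediction built only from baseline data is independent of treatment, so subtracting a fitted multiple of the centered prediction from the outcome leaves the difference-in-means estimator (asymptotically) unbiased while shrinking its variance when the prediction is prognostic. This is a minimal illustration under a toy data-generating process, not the CALM estimator; `adjusted_ate` and the simulated "LLM prediction" `f` are assumptions for the sketch.

```python
import numpy as np

def adjusted_ate(Y, T, f):
    """Residualize Y on the centered auxiliary prediction f within each
    arm (heterogeneous slopes), then take the difference of adjusted
    arm means. f must depend only on baseline (pre-treatment) data."""
    fc = f - f.mean()
    adj_means = []
    for t in (1, 0):
        y, g = Y[T == t], fc[T == t]
        beta = np.cov(y, g, ddof=0)[0, 1] / g.var()  # arm-specific slope
        adj_means.append((y - beta * g).mean())      # adjusted arm mean
    return adj_means[0] - adj_means[1]

# Toy RCT: true ATE = 1.0; f is a biased, noisy prediction of Y
# built from the baseline signal X (standing in for LLM output).
rng = np.random.default_rng(0)
n = 4000
T = rng.binomial(1, 0.5, n)                     # randomized assignment
X = rng.normal(size=n)                          # baseline signal
Y = 1.0 * T + 2.0 * X + rng.normal(size=n)      # observed outcome
f = 1.5 * X + 0.7 + 0.3 * rng.normal(size=n)    # biased auxiliary prediction

naive = Y[T == 1].mean() - Y[T == 0].mean()     # difference in means
adj = adjusted_ate(Y, T, f)                      # residualized estimate
print(naive, adj)
```

Note that `f` is deliberately biased (wrong slope, constant offset); the calibration slope absorbs the usable signal and the centering removes the offset, which is the sense in which validity is preserved even when the LLM predictions themselves are off.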