🤖 AI Summary
Estimating causal effects from unstructured text—such as clinical notes and street-outreach records—is challenging due to high expert annotation costs and noisy large language model (LLM) predictions. Method: We propose a budget-constrained, two-stage adaptive labeling strategy that dynamically allocates scarce expert resources, integrating a small set of high-quality expert annotations with abundant, scalable but noisy LLM annotations to improve average treatment effect (ATE) estimation. Contribution/Results: Our approach uniquely unifies missing-outcome causal inference frameworks with mixed-label data modeling, incorporating design-based causal estimators, Bayesian experimental design, and statistical learning theory. Evaluated on synthetic and real-world street-outreach data, it reduces ATE estimation error by 37%–52% relative to full-expert, full-LLM, and weighted baselines, achieving asymptotically optimal variance.
📝 Abstract
Estimating the causal effects of an intervention on outcomes is crucial. But often, in domains such as healthcare and social services, this critical information about outcomes is documented in unstructured text, e.g., clinical notes in healthcare or case notes in social services. For example, street outreach to homeless populations is a common social-services intervention with ambiguous and hard-to-measure outcomes. Outreach workers compile case note records that are informative of outcomes. Although experts can succinctly extract relevant information from such unstructured case notes, it is costly or infeasible to do so for an entire corpus, which can span millions of notes. Recent advances in large language models (LLMs) enable scalable but potentially inaccurate annotation of unstructured text data. We leverage the decision of which datapoints should receive expert annotation vs. noisy imputation under budget constraints in a "design-based" estimator that combines limited expert data with plentiful noisy imputation data via *causal inference with missing outcomes*. We develop a two-stage adaptive algorithm that optimizes the expert annotation probabilities, estimating the average treatment effect (ATE) with optimal asymptotic variance. We demonstrate how expert labels and LLM annotations can be combined strategically, efficiently, and responsibly in a causal estimator. We run experiments on simulated data and two real-world datasets, including one on street outreach, to show the versatility of our proposed method.
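To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of a design-based estimator that uses a noisy LLM label for every unit and applies an inverse-probability-weighted correction on the expert-labeled subsample, so the estimate stays unbiased even when LLM labels are systematically biased. All names (`debiased_ate`, the simulated data) are hypothetical, and treatment is assumed randomized; the paper's two-stage algorithm additionally optimizes the expert-labeling probabilities `pi`, which are fixed here for simplicity.

```python
import numpy as np

def debiased_ate(treat, y_llm, y_expert, labeled, pi):
    """Design-based ATE estimate mixing expert and LLM annotations.

    treat    : 0/1 treatment indicator per unit
    y_llm    : noisy LLM-imputed outcome for every unit
    y_expert : expert-annotated outcome (valid only where labeled == 1)
    labeled  : 0/1 indicator that a unit received expert annotation
    pi       : probability each unit was selected for expert labeling
    """
    # Bias-corrected pseudo-outcome: LLM prediction plus an
    # inverse-probability-weighted correction on the expert subsample.
    y_tilde = y_llm + labeled * (y_expert - y_llm) / pi
    # Difference in means across arms (assumes randomized treatment).
    return y_tilde[treat == 1].mean() - y_tilde[treat == 0].mean()

# Simulated example with a true ATE of 2.0 and biased LLM labels.
rng = np.random.default_rng(0)
n = 50_000
treat = rng.integers(0, 2, n)
y_true = 1.0 + 2.0 * treat + rng.normal(0.0, 1.0, n)
y_llm = y_true + rng.normal(0.5, 1.0, n)   # LLM labels: biased and noisy
pi = np.full(n, 0.1)                       # 10% expert-annotation budget
labeled = (rng.random(n) < pi).astype(float)

est = debiased_ate(treat, y_llm, y_true, labeled, pi)
naive = y_llm[treat == 1].mean() - y_llm[treat == 0].mean()
```

With only 10% of units expert-labeled, `est` concentrates around the true ATE of 2.0, while a full-LLM estimate would inherit any systematic bias of the LLM labels if that bias differed across arms.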