A Cautionary Tale on Integrating Studies with Disparate Outcome Measures for Causal Inference

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the efficiency of data integration in multi-source causal studies when outcome measures differ across sources (e.g., the SOW vs. COW withdrawal scales). To tackle the lack of comparability across measurement instruments, we formulate three nested cross-scale association assumptions and theoretically characterize their efficiency-bias trade-offs: only the strongest assumption guarantees asymptotic efficiency gains, yet it is highly sensitive to misspecification; weaker assumptions may yield finite-sample improvements, but these benefits vanish asymptotically. Leveraging semiparametric estimation, sensitivity analysis, and simulation studies, and illustrating empirically with an integration of the XBOT and POAT trials, we provide the first rigorous demonstration that commonly adopted integration strategies incur severe bias when the linking assumption is misspecified. Our core contribution is a theoretical framework for data integration under outcome measurement heterogeneity, which precisely delineates the conditions necessary for efficiency gains and quantifies the associated risks, thereby establishing a methodological benchmark for synthesizing multi-source evidence.

📝 Abstract
Data integration approaches are increasingly used to enhance the efficiency and generalizability of studies. However, a key limitation of these methods is the assumption that outcome measures are identical across datasets -- an assumption that often does not hold in practice. Consider the following opioid use disorder (OUD) studies: the XBOT trial and the POAT study, both evaluating the effect of medications for OUD on withdrawal symptom severity (not the primary outcome of either trial). While XBOT measures withdrawal severity using the subjective opiate withdrawal scale, POAT uses the clinical opiate withdrawal scale. We analyze this realistic yet challenging setting where outcome measures differ across studies and where neither study records both types of outcomes. Our paper studies whether and when integrating studies with disparate outcome measures leads to efficiency gains. We introduce three sets of assumptions -- with varying degrees of strength -- linking both outcome measures. Our theoretical and empirical results highlight a cautionary tale: integration can improve asymptotic efficiency only under the strongest assumption linking the outcomes. However, misspecification of this assumption leads to bias. In contrast, a milder assumption may yield finite-sample efficiency gains, yet these benefits diminish as sample size increases. We illustrate these trade-offs via a case study integrating the XBOT and POAT datasets to estimate the comparative effect of two medications for opioid use disorder on withdrawal symptoms. By systematically varying the assumptions linking the SOW and COW scales, we show potential efficiency gains and the risks of bias. Our findings emphasize the need for careful assumption selection when fusing datasets with differing outcome measures, offering guidance for researchers navigating this common challenge in modern data integration.
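The bias mechanism described in the abstract can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's semiparametric estimator: two randomized studies measure the same latent withdrawal severity on different scales, and the analyst pools them by inverting an *assumed* linear link between the scales. All numbers (effect size, link functions, sample sizes) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
TAU = 2.0  # true treatment effect on the latent severity scale

def simulate_study(n):
    """One randomized study: treatment indicator and latent withdrawal severity."""
    t = rng.integers(0, 2, n)
    latent = 10.0 + TAU * t + rng.normal(0.0, 2.0, n)
    return t, latent

def integrated_estimate(true_link, assumed_inverse, n=50_000):
    """Pool study A (outcome on the latent scale) with study B (outcome on a
    transformed scale), converting B's outcomes via the analyst's assumed
    inverse link, then take the pooled difference in means."""
    t_a, y_a = simulate_study(n)          # SOW-like scale, observed directly
    t_b, latent_b = simulate_study(n)
    y_b = true_link(latent_b)             # COW-like scale, observed in study B
    y_b_converted = assumed_inverse(y_b)  # cross-scale linking assumption
    t = np.concatenate([t_a, t_b])
    y = np.concatenate([y_a, y_b_converted])
    return y[t == 1].mean() - y[t == 0].mean()

# The analyst always assumes the linear link y_B = 2 * y_A (coefficients
# treated as known for simplicity).
assumed_inv = lambda y: y / 2.0

# Case 1: the assumed link is correct -> pooling recovers TAU.
est_correct = integrated_estimate(lambda x: 2.0 * x, assumed_inv)

# Case 2: the true link is mildly nonlinear -> the pooled estimate is biased.
est_biased = integrated_estimate(lambda x: x**2 / 5.0, assumed_inv)

print(f"correct link assumption:      {est_correct:.2f}  (truth {TAU})")
print(f"misspecified link assumption: {est_biased:.2f}  (truth {TAU})")
```

Even a modest departure from the assumed link inflates the pooled estimate well above the truth, mirroring the paper's cautionary point that the strongest linking assumption buys efficiency only when it actually holds.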
Problem

Research questions and friction points this paper is trying to address.

Examining efficiency gains from integrating studies with different outcome measures
Analyzing bias risks under varying assumptions linking disparate outcomes
Evaluating medication effects on opioid withdrawal using incompatible datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces three sets of linking assumptions
Analyzes efficiency gains under strong assumptions
Highlights bias risks in assumption misspecification