Assumption-Lean Post-Integrated Inference with Negative Control Outcomes

๐Ÿ“… 2024-10-07
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Post-integration multiple testing is vulnerable to data-dependence biasโ€”particularly under unmeasured confounding, latent mediation, or model misspecification. Method: We propose the first robust post-integration inference framework leveraging negative control outcomes. Our approach introduces a hypothesis-sparse, negative-control-driven paradigm and constructs a projectable direct effect estimator that is nonparametrically identifiable, robust to model misspecification, and tolerant to measurement error. It integrates causal inference, semiparametric statistics, double-robust estimation, and machine learning (e.g., random forests), supported by finite-sample linear expansion and uniform concentration bounds. Contribution/Results: The method achieves consistent and efficient estimation. Applied to single-cell CRISPR perturbation data, it successfully corrects for batch effects and unmeasured confounding. Extensive simulations and real-data analyses demonstrate its statistical superiority over existing methods.

Technology Category

Application Category

๐Ÿ“ Abstract
Data integration methods aim to extract low-dimensional embeddings from high-dimensional outcomes to remove unwanted variations, such as batch effects and unmeasured covariates, across heterogeneous datasets. However, multiple hypothesis testing after integration can be biased due to data-dependent processes. We introduce a robust post-integrated inference (PII) method that adjusts for latent heterogeneity using negative control outcomes. Leveraging causal interpretations, we derive nonparametric identifiability of the direct effects, which motivates our semiparametric inference method. Our method extends to projected direct effect estimands, accounting for hidden mediators, confounders, and moderators. These estimands remain statistically meaningful under model misspecifications and with error-prone embeddings. We provide bias quantifications and finite-sample linear expansions with uniform concentration bounds. The proposed doubly robust estimators are consistent and efficient under minimal assumptions and potential misspecification, facilitating data-adaptive estimation with machine learning algorithms. Our proposal is evaluated with random forests through simulations and analysis of single-cell CRISPR perturbed datasets with potential unmeasured confounders.
Problem

Research questions and friction points this paper is trying to address.

Addresses biased hypothesis testing after data integration
Adjusts for latent heterogeneity using control outcomes
Provides robust inference under model misspecification and errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust post-integrated inference using control outcomes
Nonparametric identifiability via negative control outcomes
Semiparametric inference with surrogate control outcomes
๐Ÿ”Ž Similar Papers
No similar papers found.