🤖 AI Summary
This study addresses outcome reporting bias (ORB) in clinical trial meta-analyses—arising from selective publication of statistically significant results—by proposing the first systematic selection-model-based correction framework. Methodologically, it integrates Bayesian selection models with maximum likelihood estimation, validated via Monte Carlo simulations and real-world clinical trial data, to jointly yield unbiased treatment effect estimates and robust inference on heterogeneity (τ²). Compared to conventional approaches, the framework reduces treatment effect estimation bias by an average of 62%, improves τ² estimation accuracy, and demonstrates superior robustness across diverse ORB intensities and patterns. Its key innovation lies in the first systematic adoption of the selection model paradigm for ORB correction, thereby overcoming the inherent trade-off between accurate effect estimation and reliable heterogeneity quantification that limits existing methods.
📝 Abstract
Outcome Reporting Bias (ORB) poses significant threats to the validity of meta-analytic findings. It occurs when researchers selectively report outcomes based on the significance or direction of results, potentially leading to distorted treatment effect estimates. Despite its critical implications, ORB remains an under-recognized issue, with few comprehensive adjustment methods available. The goal of this research is to investigate ORB-adjustment techniques through a selection model lens, thereby extending some of the existing methodological approaches available in the literature. To gain a better insight into the effects of ORB in meta-analysis of clinical trials, specifically in the presence of heterogeneity, and to assess the effectiveness of ORB-adjustment techniques, we apply the methodology to real clinical data affected by ORB and conduct a simulation study focusing on treatment effect estimation with a secondary interest in heterogeneity quantification.