🤖 AI Summary
In robust Bayesian dynamic borrowing with mixture priors, the prior weight and the variance of the robustification component are often tuned independently, overlooking their interdependent impact on posterior inference. Method: We propose a joint optimization framework that integrates Bayesian mixture modeling, asymptotic analysis, and posterior inference to systematically investigate their synergistic effects. Through theoretical analysis and simulation studies, we identify feasible weight–variance combinations yielding approximately equivalent posteriors and demonstrate that increasing the robustification component's variance mitigates Lindley's paradox, improves Type I error control, and enhances robustness to the component's location parameter. Contribution/Results: Our framework establishes a principled, operational procedure for hyperparameter selection. It significantly improves statistical efficiency and robustness in hybrid-control trials, particularly under high-dimensional or small-sample settings, offering a novel paradigm for cross-group information borrowing in complex clinical trial designs.
📝 Abstract
The Robust Mixture Prior (RMP) is a popular Bayesian dynamic borrowing method that combines an informative historical distribution with a less informative component (referred to as the robustification component) in a mixture prior, to enhance the efficiency of hybrid-control randomized trials. Current practice typically focuses solely on the selection of the prior weight that governs the relative influence of these two components, often fixing the variance of the robustification component to that of a single observation. In this study we demonstrate that the performance of RMPs critically depends on the joint selection of both the weight and the variance of the robustification component. In particular, we show that a wide range of weight–variance pairs can yield practically identical posterior inferences (in particular regions of the parameter space) and that large-variance robustification components may be employed without incurring the so-called Lindley's paradox. We further show that the use of large-variance robustification components leads to improved asymptotic Type I error control and enhanced robustness of the RMP to the specification of the location parameter of the robustification component. Finally, we leverage these theoretical results to propose a novel and practical hyperparameter elicitation routine.
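To make the setup concrete, the following is a minimal sketch (not the authors' implementation) of a two-component RMP for a normal mean with known sampling variance: an informative component fitted to historical data is mixed with a robustification component, and conjugate updating yields a posterior mixture whose weights adapt to prior–data conflict. All numerical values and function names here are illustrative assumptions, not taken from the paper.

```python
import math

def norm_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def rmp_posterior(ybar, n, s2, w, m_h, v_h, m_r, v_r):
    """Posterior of a two-component normal robust mixture prior.

    Prior: w * N(m_h, v_h)  (informative, from historical data)
         + (1 - w) * N(m_r, v_r)  (robustification component).
    Data: sample mean ybar of n observations with known variance s2.
    Returns a list of (updated weight, posterior mean, posterior variance),
    one triple per component.
    """
    lik_var = s2 / n  # sampling variance of ybar
    comps = []
    for wk, m, v in [(w, m_h, v_h), (1 - w, m_r, v_r)]:
        # Component weight is updated by the marginal likelihood of ybar,
        # N(m, v + lik_var): components in conflict with the data lose weight.
        ml = wk * norm_pdf(ybar, m, v + lik_var)
        # Standard conjugate normal-normal update within each component.
        post_var = 1.0 / (1.0 / v + 1.0 / lik_var)
        post_mean = post_var * (m / v + ybar / lik_var)
        comps.append((ml, post_mean, post_var))
    total = sum(ml for ml, _, _ in comps)
    return [(ml / total, pm, pv) for ml, pm, pv in comps]

def posterior_mean(comps):
    """Overall posterior mean of the mixture."""
    return sum(wk * pm for wk, pm, _ in comps)
```

With this sketch one can compare, say, a moderate-weight prior with a unit-variance robustification component against a smaller weight paired with a much larger variance, and inspect how close the resulting posterior means and updated weights are; the paper's point is that such weight–variance pairs trade off against each other rather than acting independently.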