Survival Analysis with Adversarial Regularization

📅 2023-12-26
🏛️ IEEE International Conference on Healthcare Informatics
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the degradation of neural-network survival models under dataset uncertainties such as noisy measurements and annotation errors, this paper proposes an adversarial regularization training framework based on min-max optimization. It introduces CROWN-IBP, a formal verification method, into survival modeling for the first time, yielding a differentiable and scalable adversarially robust loss function that provides rigorous robustness guarantees for fully-parametric survival models (e.g., Weibull and Log-Normal). Evaluated on 10 SurvSet benchmark datasets, the method consistently outperforms state-of-the-art deep survival models and existing adversarial training approaches: it achieves significant improvements in negative log-likelihood (NegLL), integrated Brier score (IBS), and concordance index (CI), with generalization improving by up to 150% compared to baselines. The approach thus delivers both theoretically verifiable robustness and empirically superior performance in survival modeling.
๐Ÿ“ Abstract
Survival Analysis (SA) models the time until an event occurs, with applications in fields like medicine, defense, finance, and aerospace. Recent research indicates that Neural Networks (NNs) can effectively capture complex data patterns in SA, whereas simple generalized linear models often fall short in this regard. However, dataset uncertainties (e.g., noisy measurements, human error) can degrade NN model performance. To address this, we leverage advances in NN verification to develop training objectives for robust, fully-parametric SA models. Specifically, we propose an adversarially robust loss function based on a Min-Max optimization problem. We employ CROWN-Interval Bound Propagation (CROWN-IBP) to tackle the computational challenges inherent in solving this Min-Max problem. Evaluated over 10 SurvSet datasets, our method, Survival Analysis with Adversarial Regularization (SAWAR), consistently outperforms baseline adversarial training methods and state-of-the-art (SOTA) deep SA models across various covariate perturbations with respect to Negative Log Likelihood (NegLL), Integrated Brier Score (IBS), and Concordance Index (CI) metrics. Thus, we demonstrate that adversarial robustness enhances SA predictive performance and calibration, mitigating data uncertainty and improving generalization across diverse datasets by up to 150% compared to baselines.
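The min-max idea in the abstract can be made concrete for the simplest case. The sketch below is illustrative only, not the paper's SAWAR implementation: it takes a Weibull model with a linear log-scale predictor and upper-bounds the worst-case negative log-likelihood over an L-infinity ball of radius `eps` around the covariates. Because the NLL is convex in the linear predictor `eta`, its maximum over the induced interval of `eta` is attained at an endpoint, so the inner maximization reduces to evaluating two points (a far cruder bound than CROWN-IBP, which handles deep non-linear networks).

```python
import numpy as np

def weibull_nll(eta, k, t, event):
    """NLL of a Weibull survival model with log-scale eta and shape k.
    event=1: observed failure (use log-density); event=0: right-censored
    (use log-survival)."""
    z = (t * np.exp(-eta)) ** k          # (t / lambda)^k with lambda = exp(eta)
    log_f = np.log(k) - k * eta + (k - 1) * np.log(t) - z
    log_S = -z
    return -(event * log_f + (1 - event) * log_S)

def robust_weibull_nll(x, w, b, k, t, event, eps):
    """Worst-case NLL over all x' with ||x' - x||_inf <= eps.
    For the linear predictor eta = w.x + b, eta ranges over
    [eta - eps*sum|w|, eta + eps*sum|w|]; the NLL is convex in eta,
    so the interval maximum sits at one of the two endpoints."""
    eta = x @ w + b
    slack = eps * np.abs(w).sum()
    return np.maximum(weibull_nll(eta - slack, k, t, event),
                      weibull_nll(eta + slack, k, t, event))
```

Minimizing `robust_weibull_nll` over `(w, b, k)` would give the outer step of the min-max problem; with `eps = 0` it reduces to ordinary maximum-likelihood training.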
Problem

Research questions and friction points this paper is trying to address.

Enhancing survival analysis model robustness against dataset uncertainties
Addressing computational challenges in adversarial training for survival models
Improving predictive performance and calibration under covariate perturbations
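One of the evaluation metrics mentioned above, the concordance index, measures how well predicted risks rank subjects by failure time. A minimal sketch of Harrell's C (the uncensored-anchor variant; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C: among comparable pairs (the earlier time is an
    observed event), the fraction where the earlier-failing subject
    has the higher predicted risk; ties in risk count 0.5."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A perfect ranking yields 1.0, random risks about 0.5, and a fully reversed ranking 0.0.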
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarially robust loss function via Min-Max optimization
CROWN-IBP for computational efficiency in training
Enhanced survival prediction under covariate perturbations
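CROWN-IBP, cited above for computational efficiency, combines cheap interval bound propagation (IBP) with tighter backward CROWN bounds. The sketch below shows only the plain IBP half, assuming a feed-forward net of linear and ReLU layers; it is a simplification of what CROWN-IBP actually computes, not the method itself.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate elementwise input bounds through y = W @ x + b.
    Writing the box as center +/- radius, the output radius is
    |W| @ radius, which is exact for a single affine layer."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    y_mid = W @ mid + b
    y_rad = np.abs(W) @ rad
    return y_mid - y_rad, y_mid + y_rad

def ibp_relu(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```

Chaining these layer by layer gives sound (if loose) output bounds for every input in the perturbation box, which is what makes a differentiable worst-case loss tractable during training.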