π€ AI Summary
Deep learning survival models achieve high predictive accuracy but suffer from limited clinical trustworthiness due to their βblack-boxβ nature. To address time-to-event prediction, this paper introduces the first gradient-based interpretability framework for survival analysis, centered on Time-Aware GradSHAP(t)βthe first method integrating gradient backpropagation with SHAP theory in survival modeling, supporting multimodal inputs and dynamic temporal attribution. Theoretically, we establish differentiability and consistency guarantees for survival gradient explanations; practically, we propose a standardized visualization paradigm. Extensive evaluation on synthetic and real-world clinical datasets demonstrates that GradSHAP(t) significantly outperforms SurvSHAP(t) and SurvLIME, achieving a 4.7% AUC improvement and 3.2Γ speedup in computation, while preserving both local and global interpretability and accurately capturing time-varying feature importance.
π Abstract
Deep learning survival models often outperform classical methods in time-to-event predictions, particularly in personalized medicine, but their"black box"nature hinders broader adoption. We propose a framework for gradient-based explanation methods tailored to survival neural networks, extending their use beyond regression and classification. We analyze the implications of their theoretical assumptions for time-dependent explanations in the survival setting and propose effective visualizations incorporating the temporal dimension. Experiments on synthetic data show that gradient-based methods capture the magnitude and direction of local and global feature effects, including time dependencies. We introduce GradSHAP(t), a gradient-based counterpart to SurvSHAP(t), which outperforms SurvSHAP(t) and SurvLIME in a computational speed vs. accuracy trade-off. Finally, we apply these methods to medical data with multi-modal inputs, revealing relevant tabular features and visual patterns, as well as their temporal dynamics.