🤖 AI Summary
This work proposes a zero-shot transfer approach that enables off-the-shelf tabular foundation models to perform survival prediction on right-censored data without explicit training. By discretizing event times, both static and dynamic survival problems are reformulated as a sequence of binary classification tasks, with censored observations handled naturally through in-context learning. Theoretical analysis establishes that, under standard censoring assumptions, minimizing the classification loss asymptotically recovers the true survival probabilities. Extensive experiments across 53 real-world datasets demonstrate that the method consistently outperforms classical and deep learning baselines across multiple survival metrics, substantially expanding the applicability of tabular foundation models to survival analysis.
📝 Abstract
While tabular foundation models have achieved remarkable success in classification and regression, adapting them to model time-to-event outcomes for survival analysis is non-trivial due to right-censoring, in which observation of a subject may end before the event occurs. We develop a classification-based framework that reformulates both static and dynamic survival analysis as a series of binary classification problems by discretizing event times. Censored observations are naturally handled as examples with missing labels at certain time points. This classification formulation enables existing tabular foundation models to perform survival analysis through in-context learning without explicit training. We prove that under standard censoring assumptions, minimizing our binary classification loss recovers the true survival probabilities as the training set size increases. We demonstrate through evaluation across $53$ real-world datasets that off-the-shelf tabular foundation models with this classification formulation outperform classical and deep learning baselines on average over multiple survival metrics.
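To make the reformulation concrete, here is a minimal sketch of the label-construction step the abstract describes: event times are discretized into bins, and each bin becomes a binary classification target, with censored subjects receiving missing labels at horizons beyond their censoring time. The function name, bin edges, and NaN-for-missing convention are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def make_bin_labels(times, events, bin_edges):
    """Convert right-censored (time, event) pairs into per-bin binary labels.

    For each horizon (bin edge) t, the label is:
      1.0  if the subject's event was observed by time t,
      0.0  if the subject is known to be event-free past time t,
      NaN  if the subject was censored before t (status at t is unknown).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    labels = np.full((len(times), len(bin_edges)), np.nan)
    for j, edge in enumerate(bin_edges):
        had_event = events & (times <= edge)   # event observed by this horizon
        at_risk = times > edge                 # followed past this horizon, event-free
        labels[had_event, j] = 1.0
        labels[at_risk, j] = 0.0
        # remaining subjects were censored before this horizon: label stays NaN
    return labels
```

Under this construction, each column of the label matrix defines one binary classification task that an off-the-shelf tabular classifier could be prompted with in-context (rows with NaN labels simply omitted from that task's context), and the per-horizon positive-class probabilities estimate the discretized event distribution.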