🤖 AI Summary
This paper addresses the lack of statistical foundation for stochastic gradient descent (SGD) optimizing mini-batch partial likelihood—rather than the standard partial likelihood—in deep Cox neural networks. Methodologically, it introduces the mini-batch maximum partial likelihood estimator (mb-MPLE) and systematically characterizes how batch size affects estimation consistency, convergence rate, and asymptotic efficiency. Theoretical contributions include: (1) establishing that the SGD estimator in Cox-NN achieves the optimal minimax convergence rate (up to polylogarithmic factors); (2) proving √n-consistency and asymptotic normality in classical Cox regression, with asymptotic variance depending on the batch size; and (3) quantifying the trade-off between batch size and statistical efficiency, yielding a practical batch-size selection criterion. Experiments demonstrate that, guided by this theory, SGD converges faster than full-batch gradient descent on massive survival datasets while retaining the estimator's statistical guarantees.
📝 Abstract
Optimizing Cox regression and its neural network variants poses substantial computational challenges in large-scale studies. Stochastic gradient descent (SGD), known for its scalability in model optimization, has recently been adapted to optimize Cox models. Unlike its conventional application, which typically targets a sum of independent individual losses, SGD for Cox models updates parameters based on the partial likelihood of a subset of the data. Despite its empirical success, the theoretical foundation for optimizing the Cox partial likelihood with SGD is largely underexplored. In this work, we demonstrate that the SGD estimator targets an objective function that is batch-size-dependent. We establish that the SGD estimator for the Cox neural network (Cox-NN) is consistent and achieves the optimal minimax convergence rate up to a polylogarithmic factor. For Cox regression, we further prove the $\sqrt{n}$-consistency and asymptotic normality of the SGD estimator, with variance depending on the batch size. Furthermore, we quantify the impact of batch size on Cox-NN training and its effect on the SGD estimator's asymptotic efficiency in Cox regression. These findings are validated by extensive numerical experiments and provide guidance for selecting batch sizes in SGD applications. Finally, we demonstrate the effectiveness of SGD in a real-world application where full-batch gradient descent (GD) is infeasible due to the large scale of the data.
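To make the abstract's key object concrete, here is a minimal sketch of the mini-batch negative log partial likelihood that such an SGD scheme would optimize. This is not the authors' implementation; the function name and the Breslow-style handling (ties broken by sort order, risk sets formed within the batch only) are illustrative assumptions.

```python
import numpy as np

def minibatch_neg_log_partial_likelihood(scores, times, events):
    """Negative log partial likelihood of one mini-batch (illustrative sketch).

    scores : model outputs g(x_i) (log hazard ratios) for the batch
    times  : observed follow-up times
    events : 1 = event observed, 0 = censored

    Risk sets are formed within the batch, which is exactly why the
    objective (and hence the estimator) depends on the batch size.
    """
    order = np.argsort(-times)  # sort by descending time
    scores, events = scores[order], events[order]
    # log sum of exp(scores) over each subject's within-batch risk set
    # (all subjects with time >= that subject's time, given the sort)
    log_risk = np.logaddexp.accumulate(scores)
    # sum partial-likelihood terms over event (uncensored) subjects only
    return -np.sum((scores - log_risk)[events == 1])
```

An SGD step would then average this loss over random batches and backpropagate through `scores`; with batch size equal to n, the objective reduces to the standard full-sample log partial likelihood.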