🤖 AI Summary
This study addresses the well-posedness of the posterior distribution in fully Bayesian functional principal component analysis (FPCA). By projecting the functions onto a spline basis, functional orthonormality is recast as an orthonormality constraint on the spline coefficients, and a smoothing penalty based on the integral of the squared second derivative is incorporated. Each smoothing parameter is treated as an inverse variance component in the associated mixed-effects model. The work establishes a sufficient condition for posterior propriety, expressed through the eigenvalues of the smoothing-penalty design matrix. This result yields a simple, practical criterion for selecting priors on the smoothing parameters, thereby improving the stability and interpretability of Bayesian FPCA models.
📝 Abstract
In fully Bayesian Functional Principal Components Analysis (FPCA), the principal components are treated as unknown infinite-dimensional parameters. By projecting the functional principal components onto a rich orthonormal spline basis, we show that orthonormality of the principal components is equivalent to orthonormality of the spline coefficients. A penalty on the integral of the squared second derivative of each functional principal component can be induced on its spline coefficients, with each function having its own smoothing parameter. Each smoothing parameter is then treated as an inverse variance component in the associated mixed-effects model. In this paper we provide a sufficient condition ensuring that the posterior distribution is proper. The condition is expressed in terms of the eigenvalues of the smoothing-penalty design matrix, which yields a practical and simple choice of prior for the smoothing parameters.
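To make the eigenvalue-based condition concrete, the sketch below builds the second-derivative penalty matrix for a spline basis and inspects its spectrum. This is an illustration under assumed settings, not the paper's implementation: the cubic B-spline basis on [0, 1], the basis size, and the helper `second_derivative_penalty` are all hypothetical choices. The matrix has entries P[j, k] ≈ ∫ B_j''(t) B_k''(t) dt, computed by trapezoidal quadrature on a fine grid.

```python
import numpy as np
from scipy.interpolate import BSpline

def second_derivative_penalty(n_basis=10, degree=3, grid_size=2001):
    """Penalty matrix P[j, k] ~ integral of B_j'' * B_k'' over [0, 1].

    Hypothetical setup: clamped cubic B-spline basis with equally
    spaced interior knots (an illustrative choice, not the paper's).
    """
    # Clamped knot vector: degree+1 repeated boundary knots at each end.
    n_interior = n_basis - degree - 1
    interior = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    knots = np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])

    # Evaluate the second derivative of each basis function on a fine grid.
    t = np.linspace(0.0, 1.0, grid_size)
    D2 = np.empty((n_basis, grid_size))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        D2[j] = BSpline(knots, coef, degree).derivative(2)(t)

    # Trapezoidal quadrature weights, then P = D2 @ diag(w) @ D2.T.
    dt = t[1] - t[0]
    w = np.full(grid_size, dt)
    w[0] = w[-1] = dt / 2.0
    return (D2 * w) @ D2.T

P = second_derivative_penalty()
eigvals = np.linalg.eigvalsh(P)  # sorted ascending
# The penalty is positive semi-definite with a two-dimensional null space:
# linear functions have zero second derivative, so the two smallest
# eigenvalues are numerically ~0 while the rest are strictly positive.
print("smallest three eigenvalues:", eigvals[:3])
```

In a prior condition phrased through these eigenvalues, the near-zero (null-space) eigenvalues are exactly the ones that make naive choices of smoothing-parameter prior problematic, which is why a spectral criterion of this kind is convenient to check in practice.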