🤖 AI Summary
In distributed principal component analysis (PCA) for high-dimensional spiked covariance models, local estimators suffer from a bias that undermines global consistency, causing significant performance degradation when the number of machines is finite. Existing methods achieve consistency only asymptotically, as the number of machines diverges.
Method: We propose a debiased distributed PCA algorithm featuring a novel local bias-correction mechanism and adaptive sparse support set detection, applicable to both sample covariance and correlation matrices.
Contribution/Results: Our theoretical analysis establishes the first consistency guarantee under only a sixth-moment assumption, without requiring the number of machines to grow unboundedly. The derived error bound strictly improves upon prior work, yielding better accuracy and stability in small-scale clusters and in sparse PCA settings. Extensive simulations and real-data experiments validate the method's robustness and superiority.
📝 Abstract
We study distributed principal component analysis (PCA) in high-dimensional settings under the spiked model. In such regimes, sample eigenvectors can deviate significantly from population ones, introducing a persistent bias. Existing distributed PCA methods are sensitive to this bias, particularly when the number of machines is small, and their consistency typically relies on the number of machines tending to infinity. We propose a debiased distributed PCA algorithm that corrects the local bias before aggregation and incorporates a sparsity-detection step to adaptively handle both sparse and non-sparse eigenvectors. Theoretically, we establish the consistency of our estimator under much weaker conditions than in the existing literature. In particular, our approach does not require symmetric innovations and assumes only a finite sixth moment. Furthermore, our method generally achieves smaller estimation error, especially when the number of machines is small. Empirically, extensive simulations and real-data experiments demonstrate that our method consistently outperforms existing distributed PCA approaches. The advantage is especially prominent when the leading eigenvectors are sparse or the number of machines is limited. Our method and theoretical analysis are also applicable to the sample correlation matrix.
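For concreteness, the one-shot aggregation pipeline that such distributed PCA methods build on can be sketched as follows: each machine computes the leading eigenvectors of its local sample covariance matrix, the local projection matrices are averaged, and the aggregate is re-diagonalized. This is a minimal illustration under a simulated spiked model, not the paper's method: the local bias-correction and sparsity-detection steps are omitted because their exact form is not given in the abstract, and all dimensions and the spike strength `lam` are hypothetical.

```python
import numpy as np

def local_top_eigvecs(X, k):
    """Top-k eigenvectors of the local sample covariance (data is mean-zero here)."""
    S = X.T @ X / X.shape[0]
    _, V = np.linalg.eigh(S)           # eigenvalues in ascending order
    return V[:, -k:]

def distributed_pca(X_blocks, k):
    """One-shot aggregation: average local projections, then re-diagonalize.

    The paper's debiasing would adjust each local estimate before this
    averaging step; that correction is intentionally left out of this sketch.
    """
    p = X_blocks[0].shape[1]
    P_bar = np.zeros((p, p))
    for X in X_blocks:
        V = local_top_eigvecs(X, k)
        P_bar += V @ V.T               # projections are sign-invariant
    P_bar /= len(X_blocks)
    _, V = np.linalg.eigh(P_bar)
    return V[:, -k:]

# Rank-one spiked model: Sigma = I + lam * u u^T with a sparse
# leading eigenvector u (all parameters below are hypothetical).
rng = np.random.default_rng(0)
p, n_per, L, lam = 50, 500, 5, 10.0
u = np.zeros(p)
u[:5] = 1.0 / np.sqrt(5)
Sigma_half = np.eye(p) + (np.sqrt(1.0 + lam) - 1.0) * np.outer(u, u)
blocks = [rng.standard_normal((n_per, p)) @ Sigma_half for _ in range(L)]

u_hat = distributed_pca(blocks, k=1)[:, 0]
alignment = abs(u_hat @ u)             # close to 1 means good recovery
print(f"alignment with true eigenvector: {alignment:.3f}")
```

Averaging projection matrices rather than raw eigenvectors sidesteps the sign ambiguity of eigendecompositions; the bias the paper targets arises because each local projection is itself a biased estimate of the population projection in high dimensions.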