🤖 AI Summary
This work addresses the limitations of existing grayscale image colorization methods: insufficient modeling of color style diversity, physically implausible outputs, and low visual fidelity. To this end, it introduces symmetric positive-definite (SPD) manifold geometry into the generative adversarial network (GAN) framework for the first time. Specifically, color covariance priors are modeled on the SPD manifold, and Riemannian metric constraints are imposed on the generator's output to mitigate the distributional bias inherent in Euclidean-space modeling. The method combines Cholesky parameterization, a manifold projection layer, and Riemannian gradient descent to ensure stable optimization. Evaluated on ImageNet and COCO, the approach achieves state-of-the-art performance, improving PSNR by 2.1 dB and SSIM by 0.032 over prior methods. Notably, it excels in color consistency and fine-grained texture recovery, demonstrating both physical plausibility and enhanced perceptual quality.
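To make the SPD-manifold machinery concrete, the sketch below illustrates three of the ingredients the summary names: a Cholesky parameterization that yields an SPD matrix from unconstrained parameters, an eigenvalue-clipping projection (one common way to realize a "manifold projection layer"), and the affine-invariant Riemannian distance between SPD matrices, which could serve as a metric constraint between a predicted color covariance and a covariance prior. This is an illustrative NumPy sketch under standard textbook formulas, not the paper's actual implementation; all function names here are assumptions.

```python
import numpy as np

def spd_from_cholesky(L_params):
    """Build an SPD matrix from unconstrained Cholesky parameters.

    The strict lower triangle of L_params is used as-is; the diagonal is
    passed through exp() so the Cholesky factor has positive diagonal and
    L @ L.T is SPD by construction. (Illustrative parameterization only.)
    """
    L = np.tril(L_params, k=-1)
    L += np.diag(np.exp(np.diag(L_params)))
    return L @ L.T

def spd_project(M, eps=1e-6):
    """Project an arbitrary square matrix onto the SPD cone by
    symmetrizing and clipping eigenvalues from below at eps: a common
    choice for a manifold projection layer."""
    S = (M + M.T) / 2.0
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, eps, None)) @ V.T

def affine_invariant_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    C = A_inv_sqrt @ B @ A_inv_sqrt
    wc = np.linalg.eigvalsh((C + C.T) / 2.0)  # eigenvalues of the SPD midpoint matrix
    return np.sqrt(np.sum(np.log(wc) ** 2))
```

In a training loop, `affine_invariant_distance` (or its squared form) could act as the Riemannian penalty between the generator's output covariance and the SPD prior, while `spd_project` keeps intermediate estimates on the manifold despite numerical drift.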