Do Generalized Gamma Scale Mixtures of Normals Fit Large Image Data Sets?

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the universality and practical efficacy of the generalized gamma scale mixture of normals (GGSN) as a sparse prior for image transform-domain coefficients. The authors systematically evaluate its modeling capability on Fourier coefficients, Haar and Gabor wavelet coefficients, and first-layer AlexNet convolutional features, drawn from remote sensing, medical, and natural image datasets. This is the first large-scale empirical validation of GGSN's broad applicability across diverse real-world image transforms, and it reveals an effective parameter region substantially larger than previously reported. To improve fitting robustness, the authors introduce procedures for identifying approximately exchangeable coefficients and for data augmentation. Across all test scenarios, GGSN provides a better goodness of fit than the conventional priors it contains, including Gaussian, Laplacian, ℓₚ-norm, and Student's t distributions. Finally, the paper characterizes the structural image patterns under which GGSN modeling degrades, delineating its practical limits.
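For intuition, here is a minimal sketch (ours, not code from the paper) of how a GGSN prior generates coefficients: draw a variance from a generalized gamma distribution, then draw a zero-mean normal coefficient with that variance. The parameter values, and the use of SciPy's gengamma parameterization, are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative shape parameters (assumptions, not values from the paper).
# SciPy's gengamma density is f(v; a, c) ∝ v**(c*a - 1) * exp(-v**c):
# c*a sets the power-law behavior near the origin, c sets the tail decay.
a, c = 0.5, 0.8

n = 100_000
variances = stats.gengamma(a, c).rvs(size=n, random_state=rng)

# GGSN draw: zero-mean normal with a generalized-gamma-distributed variance.
coeffs = rng.normal(loc=0.0, scale=np.sqrt(variances))

# Relative to a Gaussian of equal variance, the marginal is sharply peaked
# at zero and heavy-tailed, i.e. the "sparse prior" behavior described above.
print("excess kurtosis:", stats.kurtosis(coeffs))
```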

📝 Abstract
A scale mixture of normals is a distribution formed by mixing a collection of normal distributions with fixed mean but different variances. A generalized gamma scale mixture draws the variances from a generalized gamma distribution. Generalized gamma scale mixtures of normals have been proposed as an attractive class of parametric priors for Bayesian inference in inverse imaging problems. Generalized gamma scale mixtures have two shape parameters, one that controls the behavior of the distribution about its mode, and the other that controls its tail decay. In this paper, we provide the first demonstration that the prior model is realistic for multiple large imaging data sets. We draw data from remote sensing, medical imaging, and image classification applications. We study the realism of the prior when applied to Fourier and wavelet (Haar and Gabor) transformations of the images, as well as to the coefficients produced by convolving the images against the filters used in the first layer of AlexNet, a popular convolutional neural network trained for image classification. We discuss data augmentation procedures that improve the fit of the model, procedures for identifying approximately exchangeable coefficients, and characterize the parameter regions that best describe the observed data sets. These regions are significantly broader than the region of primary focus in computational work. We show that this prior family provides a substantially better fit to each data set than any of the standard priors it contains. These include Gaussian, Laplace, $\ell_p$, and Student's $t$ priors. Finally, we identify cases where the prior is unrealistic and highlight characteristic features of images that suggest the model will fit poorly.
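Concretely, the construction described in the abstract can be written as follows (a sketch in our notation; the symbols $a$, $d$, $p$ are illustrative and need not match the paper's):

```latex
% A coefficient x is normal given its variance v, and v follows a generalized
% gamma (Stacy) distribution; d and p are the two shape parameters, a a scale.
\[
  x \mid v \sim \mathcal{N}(0, v), \qquad
  f_{\mathrm{GG}}(v; a, d, p)
    = \frac{p / a^{d}}{\Gamma(d/p)}\, v^{d-1} e^{-(v/a)^{p}}, \quad v > 0 .
\]
% Marginalizing over v yields the GGSN prior density:
\[
  \pi(x) = \int_{0}^{\infty} \mathcal{N}(x; 0, v)\,
           f_{\mathrm{GG}}(v; a, d, p)\, \mathrm{d}v .
\]
% Roughly, d shapes the mixing density near the origin (hence the prior near
% its mode), while p sets the stretched-exponential tail decay.
```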
Problem

Research questions and friction points this paper is trying to address.

Evaluates generalized gamma scale mixtures for large image datasets
Assesses model realism across diverse imaging applications and transformations
Compares performance against standard priors and identifies limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses generalized gamma scale mixtures as priors for image transform coefficients
Fits the model to multiple large imaging datasets across several transforms
Outperforms standard priors such as Gaussian and Laplace (see the fitting sketch below)
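As a toy illustration of this kind of comparison (our sketch, not the paper's code or data), the standard priors that the GGSN family contains can be fit by maximum likelihood and ranked by total log-likelihood. Synthetic Laplace draws stand in for real transform-domain coefficients here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for transform-domain coefficients; in the paper these would be
# Fourier, wavelet, or first-layer AlexNet filter responses.
coeffs = stats.laplace(scale=0.7).rvs(size=50_000, random_state=rng)

candidates = {
    "Gaussian": stats.norm,
    "Laplace": stats.laplace,
    "Student t": stats.t,
}

for name, dist in candidates.items():
    params = dist.fit(coeffs)                  # maximum-likelihood fit
    ll = dist.logpdf(coeffs, *params).sum()    # total log-likelihood
    print(f"{name:10s} total log-likelihood: {ll:,.0f}")

# A GGSN fit has no closed-form density; it would require numerically
# integrating the normal density against the generalized gamma mixing
# density, so it is omitted from this sketch.
```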
Brandon Marks
Stanford University. Statistics
Yash Dave
Stanford University. Institute for Computational and Mathematical Engineering
Zixun Wang
University of California, Berkeley. Statistics
Hannah Chung
University of California, Berkeley. Statistics
Riya Patwa
University of California, Berkeley. Statistics
Simon Cha
University of California, Berkeley. Statistics
Michael Murphy
University of California, Berkeley. Statistics
Alexander Strang
Assistant Professor, UC Berkeley
stochastic processes, graph theory, convex optimization, Bayesian inference