🤖 AI Summary
To address inaccurate parameter estimation and poor interpretability of generalized normal mixture models under high component overlap, this paper proposes the Constrained Mixture of Generalized Normal Distributions (CMGND), the first such model to incorporate customizable equality constraints enabling multiple components to share location, scale, or shape parameters. Methodologically, we develop a constrained maximum likelihood algorithm that jointly iterates the Expectation-Conditional Maximization (ECM) and Newton–Raphson procedures. In both simulation studies and empirical analysis of stock returns, CMGND significantly improves model fit—as measured by the Bayesian Information Criterion—relative to unconstrained generalized normal, constrained t-, and normal mixture models. It more accurately captures leptokurtosis, heavy tails, and heterogeneous kurtosis while balancing parametric efficiency with distributional flexibility.
📝 Abstract
This work introduces a family of univariate constrained mixtures of generalized normal distributions (CMGND) where the location, scale, and shape parameters can be constrained to be equal across any subset of mixture components. An expectation conditional maximisation (ECM) algorithm with Newton-Raphson updates is used to estimate the model parameters under the constraints. Simulation studies demonstrate that imposing correct constraints leads to more accurate parameter estimation compared to unconstrained mixtures, especially when components substantially overlap. Constrained models also exhibit competitive performance in capturing key characteristics of the marginal distribution, such as kurtosis. On a real dataset of daily stock index returns, CMGND models outperform constrained mixtures of normals and Student's t distributions based on the BIC criterion, highlighting their flexibility in modelling nonnormal features. The proposed constrained approach enhances interpretability and can improve parametric efficiency without compromising distributional flexibility for complex data.