Maximum softly penalised likelihood in factor analysis

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In exploratory factor analysis, Heywood cases—characterized by nonpositive variance estimates—frequently cause optimization failure and biased inference. To address this, we propose a soft-penalty likelihood method that incorporates an adaptively scaled penalty term into the log-likelihood, enforcing strictly positive variance estimates and ensuring parameter estimates reside in the interior of the parameter space, thereby enhancing numerical stability and inferential reliability. We establish, for the first time, sufficient conditions guaranteeing the existence, consistency, and asymptotic normality of the maximum penalized likelihood estimator. Moreover, we prove that AIC and Hirose-type criteria retain model selection consistency under appropriate penalty scaling. Simulation studies and empirical applications demonstrate that the proposed method substantially reduces convergence failure rates and improves finite-sample estimation accuracy, model selection correctness, and factor score stability.

Technology Category

Application Category

📝 Abstract
Estimation in exploratory factor analysis often yields estimates on the boundary of the parameter space. Such occurrences, known as Heywood cases, are characterised by non-positive variance estimates and can cause issues in numerical optimisation procedures or convergence failures, which, in turn, can lead to misleading inferences, particularly regarding factor scores and model selection. We derive sufficient conditions on the model and a penalty to the log-likelihood function that i) guarantee the existence of maximum penalised likelihood estimates in the interior of the parameter space, and ii) ensure that the corresponding estimators possess the desirable asymptotic properties expected by the maximum likelihood estimator, namely consistency and asymptotic normality. Consistency and asymptotic normality are achieved when the penalisation is soft enough, in a way that adapts to the information accumulation about the model parameters. We formally show, for the first time, that the penalties of Akaike (1987) and Hirose et al. (2011) to the log-likelihood of the normal linear factor model satisfy the conditions for existence, and, hence, deal with Heywood cases. Their vanilla versions, though, can result in questionable finite-sample properties in estimation, inference, and model selection. The maximum softly-penalised likelihood framework we introduce enables the careful scaling of those penalties to ensure that the resulting estimation and inference procedures are asymptotically optimal. Through comprehensive simulation studies and the analysis of real data sets, we illustrate the desirable finite-sample properties of the maximum softly penalised likelihood estimators and associated procedures.
Problem

Research questions and friction points this paper is trying to address.

Addressing Heywood cases in factor analysis estimation
Ensuring maximum likelihood estimates remain within parameter space
Achieving asymptotic consistency and normality through soft penalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Soft penalisation ensures interior parameter space estimates
Penalties guarantee asymptotic consistency and normality
Scaled penalties provide optimal finite-sample performance
🔎 Similar Papers
No similar papers found.