Asymptotic theory and bias correction for the Wallace--Freeman estimator

📅 2026-04-01
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study addresses the long-standing lack of a systematic theoretical foundation for the large-sample properties of the Wallace–Freeman estimator, particularly concerning its existence, consistency, and bias structure. By reformulating the estimator as a penalized M-estimator with specific penalty weights, the work integrates it into the modern penalized likelihood framework for the first time. Within this framework, the authors rigorously establish its existence, consistency, asymptotic linear expansion, and asymptotic normality. Building on these results, they derive an explicit bias correction formula accurate to order $O(n^{-1})$, showing that the discrepancy in bias relative to maximum likelihood estimation originates from the gradient of the penalty term. The classical Cox–Snell bias formula is thereby extended to this estimator. Using the Weibull model as an illustration, the study quantifies the penalty’s impact on bias, providing a solid theoretical basis for inference involving this estimator.
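The Cox–Snell extension mentioned above concerns the classical $O(n^{-1})$ bias of maximum likelihood. As a generic illustration of that kind of first-order bias correction (not from the paper, whose worked example is the Weibull model), the sketch below uses the one-parameter exponential model, where the rate MLE has first-order bias $\lambda/n$; the function name `mle_bias` is hypothetical:

```python
import random

# Monte Carlo check of first-order (Cox--Snell-style) bias correction for
# the MLE of an exponential rate. For n i.i.d. Exponential(lam) draws,
# E[lam_hat] = lam + lam/n + O(n^{-2}), so subtracting the plug-in
# correction lam_hat/n should roughly remove the O(1/n) bias.
# This is a generic illustration, not the paper's Weibull derivation.

def mle_bias(lam, n, reps, seed=1):
    rng = random.Random(seed)
    raw, corrected = 0.0, 0.0
    for _ in range(reps):
        s = sum(rng.expovariate(lam) for _ in range(n))
        lam_hat = n / s                       # MLE of the rate
        raw += lam_hat - lam                  # uncorrected error
        corrected += (lam_hat - lam_hat / n) - lam  # bias-corrected error
    return raw / reps, corrected / reps

raw, corr = mle_bias(lam=2.0, n=20, reps=20000)
print(raw, corr)  # raw bias is clearly positive; corrected bias is near zero
```

With small $n$ the raw Monte Carlo bias sits near $\lambda/(n-1)$, while the corrected estimator's bias is an order of magnitude smaller, which is the pattern the Cox–Snell formula predicts.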
📝 Abstract
The Wallace--Freeman estimator is a classical invariant point estimator whose large-sample properties have not been fully developed in a modern asymptotic framework. We show that the estimator can be formulated as a penalised M-estimator with a specific penalty weight, yielding a unified route to its asymptotic analysis. This representation allows us to establish existence, consistency, an asymptotic linear expansion, and asymptotic normality under standard regularity conditions. We further derive the first-order difference between the Wallace--Freeman estimator and the maximum likelihood estimator, and show that this induces an explicit $O(n^{-1})$ bias correction determined by the gradient of the penalty. As a consequence, the Cox--Snell bias formula for the maximum likelihood estimator extends naturally to the Wallace--Freeman estimator by the addition of a penalty-driven correction term. As an illustration, we derive the first-order bias of the Wallace--Freeman estimator for the Weibull model and show how the penalty modifies the corresponding maximum likelihood bias. These results place the Wallace--Freeman estimator within the general theory of penalised likelihood and provide a rigorous asymptotic basis for its use in parametric inference.
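The penalised M-estimator representation described in the abstract can be sketched numerically. The following is a minimal illustration under assumptions not made by the paper: the one-parameter exponential model stands in for the paper's Weibull example, and a flat prior is assumed, so the Wallace–Freeman objective reduces to the log-likelihood minus half the log-determinant of the Fisher information:

```python
import math
import random

# Hedged sketch: the Wallace--Freeman (WF) estimator viewed as a penalised
# M-estimator. Assumptions (not from the paper): exponential model with
# rate lam, flat prior, so the WF objective is
#   log L(lam) - 0.5 * log det I(lam),
# with log L(lam) = n*log(lam) - lam*s and I(lam) = n/lam**2.

def wf_objective(lam, n, s):
    """Penalised log-likelihood for n exponential observations with sum s."""
    return n * math.log(lam) - lam * s - 0.5 * math.log(n / lam**2)

def estimates(xs):
    n, s = len(xs), sum(xs)
    lam_mle = n / s        # maximum likelihood estimate
    lam_wf = (n + 1) / s   # stationary point of the WF objective
    return lam_mle, lam_wf

random.seed(0)
xs = [random.expovariate(2.0) for _ in range(50)]
lam_mle, lam_wf = estimates(xs)

# The penalty gradient shifts the estimating equation by +1/lam, so the two
# estimators differ by 1/s, an O(n^{-1}) discrepancy of the kind the paper's
# bias-correction formula accounts for.
print(lam_mle, lam_wf, lam_wf - lam_mle)
```

Here the WF objective is concave in $\lambda$, so its stationary point $(n+1)/s$ is the maximiser; the closed forms make the $O(n^{-1})$ gap between the two estimators explicit.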
Problem

Research questions and friction points this paper is trying to address.

Wallace--Freeman estimator
asymptotic theory
bias correction
penalised M-estimator
parametric inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

penalised M-estimator
asymptotic normality
bias correction
Wallace–Freeman estimator
penalty-driven correction
Enes Makalic
Professor, Faculty of Information Technology, Monash University
Artificial Intelligence · Statistics · Machine Learning
Daniel F. Schmidt
Faculty of Information Technology, Monash University, Clayton, VIC 3800