Inference for Error-Prone Count Data: Estimation under a Binomial Convolution Framework

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the pervasive issue of asymmetric measurement error in bounded count data, such as oral reading fluency scores, by proposing a binomial convolution modeling framework. The framework models discrete scoring as aggregated counts subject to misclassification, distinguishes true positive from true negative classification accuracy, and enables score calibration in unsupervised settings. Methodologically, it extends binary misclassification models to the bounded count setting for the first time and systematically compares three parameter estimation strategies: maximum likelihood estimation (MLE), linear regression, and the generalized method of moments (GMM). MLE is most accurate under correct model specification; regression is simple and stable but less precise; GMM offers a compromise in model dependence but is sensitive to outliers. Empirical evaluation on real human–machine scoring data demonstrates substantial improvements in estimation accuracy and assessment reliability.

📝 Abstract
Measurement error in count data is common but underexplored in the literature, particularly in contexts where observed scores are bounded and arise from discrete scoring processes. Motivated by applications in oral reading fluency assessment, we propose a binomial convolution framework that extends binary misclassification models to settings where only the aggregate number of correct responses is observed, and errors may involve both overcounting and undercounting the number of events. The model accommodates distinct true positive and true negative accuracy rates and preserves the bounded nature of the data. Assuming the availability of both contaminated and error-free scores on a subset of items, we develop and compare three estimation strategies: maximum likelihood estimation (MLE), linear regression, and generalized method of moments (GMM). Extensive simulations show that MLE is most accurate when the model is correctly specified but is computationally intensive and less robust to misspecification. Regression is simple and stable but less precise, while GMM offers a compromise in model dependence, though it is sensitive to outliers. In practice, this framework supports improved inference in unsupervised settings where contaminated scores serve as inputs to downstream analyses. By quantifying accuracy rates, the model enables score corrections even when no specific outcome is yet defined. We demonstrate its utility using real oral reading fluency data, comparing human and AI-generated scores. Findings highlight the practical implications of estimator choice and underscore the importance of explicitly modeling asymmetric measurement error in count data.
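The regression strategy described in the abstract follows directly from the conditional mean of the convolution model. As a minimal sketch, assume an observed score W = Binomial(X, p11) + Binomial(n − X, 1 − p00) for a passage of n words with error-free count X, where p11 and p00 are the true positive and true negative accuracy rates (all parameter values and variable names here are illustrative, not taken from the paper). Then E[W | X] = n(1 − p00) + (p11 + p00 − 1)X, so a least-squares fit of contaminated scores on error-free scores identifies both rates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed values, not from the paper)
n = 60                 # words in a passage
p11, p00 = 0.95, 0.90  # true positive / true negative accuracy rates
m = 2000               # passages with both error-free and contaminated scores

X = rng.binomial(n, 0.8, size=m)  # error-free counts
# Binomial convolution: correctly scored hits + falsely scored misses
W = rng.binomial(X, p11) + rng.binomial(n - X, 1 - p00)

# E[W | X] = n*(1 - p00) + (p11 + p00 - 1) * X, so fit a line W ~ X
slope, intercept = np.polyfit(X, W, 1)
p00_hat = 1 - intercept / n          # from the intercept
p11_hat = slope + intercept / n      # slope + (1 - p00_hat)

print(f"p11_hat = {p11_hat:.3f}, p00_hat = {p00_hat:.3f}")
```

This illustrates why the paper describes regression as simple and stable: the estimator needs only two moments of the paired scores, at the cost of ignoring the full likelihood.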
Problem

Research questions and friction points this paper is trying to address.

Estimating error-prone count data under binomial convolution
Comparing MLE, regression, GMM for contaminated score estimation
Modeling asymmetric measurement error in bounded discrete data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Binomial convolution framework for count data
Three estimation strategies: MLE, regression, GMM
Accommodates asymmetric measurement error in counts
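Under the same assumed convolution model, the MLE strategy can be sketched as follows (function names, starting values, and settings are hypothetical; the paper's actual implementation may differ). The probability of an observed score w given a true score x is a convolution of two binomial pmfs, and the two accuracy rates are found by maximizing the resulting likelihood numerically:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def conv_pmf(w, x, n, p11, p00):
    """P(W = w | X = x): k correct hits out of x true events,
    plus w - k false hits out of the n - x non-events."""
    k = np.arange(0, x + 1)
    return np.sum(binom.pmf(k, x, p11) * binom.pmf(w - k, n - x, 1 - p00))

def neg_loglik(theta, W, X, n):
    p11, p00 = theta
    return -np.sum([np.log(conv_pmf(w, x, n, p11, p00) + 1e-300)
                    for w, x in zip(W, X)])

# Illustrative data generated from the assumed model
rng = np.random.default_rng(1)
n, p11, p00, m = 40, 0.92, 0.88, 500
X = rng.binomial(n, 0.75, size=m)
W = rng.binomial(X, p11) + rng.binomial(n - X, 1 - p00)

res = minimize(neg_loglik, x0=[0.9, 0.9], args=(W, X, n),
               bounds=[(0.5, 0.999), (0.5, 0.999)], method="L-BFGS-B")
p11_hat, p00_hat = res.x
print(f"p11_hat = {p11_hat:.3f}, p00_hat = {p00_hat:.3f}")
```

The per-observation sum over k is what makes MLE computationally heavier than regression or GMM, consistent with the trade-offs reported in the abstract.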
Yuqiu Yang
UT Southwestern Medical Center
Biostatistics, Bioinformatics, Machine Learning

Christina Vu
Texas Christian University

Cornelis J. Potgieter
Texas Christian University, University of Johannesburg

Xinlei Wang
University of Texas at Arlington

Akihito Kamata
Southern Methodist University
Educational Measurement, Psychometrics