🤖 AI Summary
This paper addresses the pervasive issue of asymmetric measurement error in bounded count data, such as oral reading fluency scores, by proposing a binomial convolution modeling framework. The framework explicitly models discrete scoring as aggregated counts subject to misclassification, distinguishes true positive and true negative classification accuracies, and enables unsupervised score calibration. Methodologically, it extends binary misclassification models to the bounded count setting for the first time and systematically compares three parameter estimation strategies: maximum likelihood estimation (MLE), linear regression, and the generalized method of moments (GMM). MLE is most accurate under correct model specification, while GMM offers a compromise between precision and model dependence. An empirical evaluation on real human–machine scoring data illustrates the practical consequences of estimator choice for estimation accuracy and assessment reliability.
📝 Abstract
Measurement error in count data is common but underexplored in the literature, particularly in contexts where observed scores are bounded and arise from discrete scoring processes. Motivated by applications in oral reading fluency assessment, we propose a binomial convolution framework that extends binary misclassification models to settings where only the aggregate number of correct responses is observed, and errors may involve both overcounting and undercounting the number of events. The model accommodates distinct true positive and true negative accuracy rates and preserves the bounded nature of the data.
Assuming the availability of both contaminated and error-free scores on a subset of items, we develop and compare three estimation strategies: maximum likelihood estimation (MLE), linear regression, and the generalized method of moments (GMM). Extensive simulations show that MLE is the most accurate when the model is correctly specified but is computationally intensive and less robust to misspecification. Regression is simple and stable but less precise, while GMM offers a compromise between precision and model dependence, though it is sensitive to outliers.
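The regression strategy can be sketched from the model's first moment. Under the convolution model, E[X | Y] = (1 − p_tn)·N + (p_tp + p_tn − 1)·Y, so on a calibration subset where both the contaminated score X and the error-free score Y are observed, an ordinary least-squares fit of X on Y identifies both accuracy rates from its slope and intercept. This is a minimal moment-matching sketch, not the paper's exact estimator; variable names are assumptions.

```python
import numpy as np

N = 50                                   # items per passage
rng = np.random.default_rng(1)
y = rng.binomial(N, 0.7, size=5_000)     # error-free calibration scores
true_p_tp, true_p_tn = 0.95, 0.90
# Contaminated scores from the convolution model
x = rng.binomial(y, true_p_tp) + rng.binomial(N - y, 1.0 - true_p_tn)

# OLS fit: slope = p_tp + p_tn - 1, intercept = (1 - p_tn) * N
slope, intercept = np.polyfit(y, x, 1)
p_tn_hat = 1.0 - intercept / N
p_tp_hat = slope + 1.0 - p_tn_hat
```

A full GMM estimator would additionally match second moments (the conditional variance of X given Y), which is where its extra efficiency relative to this regression comes from.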
In practice, this framework supports improved inference in unsupervised settings where contaminated scores serve as inputs to downstream analyses. By quantifying accuracy rates, the model enables score corrections even when no specific outcome is yet defined. We demonstrate its utility using real oral reading fluency data, comparing human and AI-generated scores. Findings highlight the practical implications of estimator choice and underscore the importance of explicitly modeling asymmetric measurement error in count data.
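Once the accuracy rates are estimated, a contaminated score can be calibrated by inverting the conditional-mean relation E[X | Y] = (1 − p_tn)·N + (p_tp + p_tn − 1)·Y. The sketch below is one plausible correction under that assumption, not the paper's specific procedure.

```python
import numpy as np

def correct_scores(x_obs, n_items, p_tp, p_tn):
    """Back out a calibrated score from a contaminated one by
    inverting E[X | Y]; clips to the valid range [0, n_items]."""
    slope = p_tp + p_tn - 1.0                      # must be > 0 to invert
    y_hat = (np.asarray(x_obs, dtype=float) - (1.0 - p_tn) * n_items) / slope
    return np.clip(y_hat, 0, n_items)
```

For example, with p_tp = 0.95 and p_tn = 0.90 on a 50-item passage, an observed score of 39 calibrates back to 40, and scores outside the feasible range are clipped to the bounds. Because no downstream outcome enters the correction, it can be applied before any specific analysis is defined.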