🤖 AI Summary
To address the label noise that is inevitable even in human-annotated NLP data, this paper proposes a single-pass, efficient, and robust annotation weighting method. Its core contribution is the first use of subword regularization for label noise detection: token-level tokenization perturbations are combined with uncertainty modeling to obtain fine-grained, token-level confidence estimates, without requiring ensemble models or iterative retraining. Empirically, the method improves model robustness to erroneous labels, yielding consistent gains on document classification and named entity recognition tasks; under controlled pseudo-noise settings, it also identifies anomalous annotations with high accuracy. Compared to conventional multi-model weighting approaches, it performs annotation weighting four to five times faster. The implementation is publicly available.
📖 Abstract
NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have attempted to develop methods that automatically reduce the adverse effect of such errors. However, existing methods are time-consuming because they require many trained models to detect errors. This paper proposes a time-saving method that utilizes a tokenization technique called subword regularization to simulate multiple error detection models. Our proposed method, SubRegWeigh, performs annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance on document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identified these labels as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh.
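To make the weighting idea concrete, here is a minimal sketch of our reading of the approach, not the authors' actual code: subword regularization produces K different tokenizations of the same sentence, a single trained model scores each one, and the agreement between the model's predictions and the gold label becomes that annotation's weight. The functions `sample_tokenize` and `predict_label` below are hypothetical toy stand-ins for a real stochastic tokenizer (e.g., SentencePiece with `enable_sampling=True`) and a trained classifier.

```python
import random
from collections import Counter


def sample_tokenize(text: str, rng: random.Random) -> list[str]:
    """Hypothetical stochastic tokenizer: randomly merges adjacent characters
    to imitate the tokenization variability of subword regularization."""
    tokens, i = [], 0
    while i < len(text):
        step = rng.choice([1, 2])  # random segment length
        tokens.append(text[i:i + step])
        i += step
    return tokens


def predict_label(tokens: list[str]) -> str:
    """Hypothetical trained model: a toy rule that depends on the
    tokenization, so different samples can yield different predictions."""
    return "POS" if len(tokens) % 2 == 0 else "NEG"


def annotation_weight(text: str, gold_label: str, k: int = 10, seed: int = 0) -> float:
    """Weight = fraction of K sampled tokenizations whose prediction matches
    the gold label. A low weight flags a likely annotation error."""
    rng = random.Random(seed)
    votes = Counter(predict_label(sample_tokenize(text, rng)) for _ in range(k))
    return votes[gold_label] / k


if __name__ == "__main__":
    print(annotation_weight("this movie was great", "POS"))
```

Because the K tokenizations are scored by one already-trained model in a single pass, this sampling replaces the ensemble of separately trained error-detection models that existing methods rely on, which is where the claimed four-to-five-fold speedup would come from.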