🤖 AI Summary
In reference-free machine translation evaluation, error span detection (ESD) requires precise localization of translation errors and accurate classification of their severity. Existing generative approaches rely predominantly on maximum a posteriori (MAP) decoding, yet the hypotheses to which the model assigns the highest likelihood often diverge substantially from human annotations. This work introduces minimum Bayes risk (MBR) decoding to generative ESD for the first time, employing sentence-level and span-level similarity as utility functions to explicitly optimize alignment with human judgments. Furthermore, we propose an MBR distillation strategy that retains model performance while reducing inference latency to greedy-decoding levels. Experiments demonstrate that our method consistently outperforms MAP baselines across system-, sentence-, and span-level evaluations, improving both error localization and severity classification, while the distilled model preserves these gains without MBR's added inference cost. A generic statement of the MBR decision rule follows below.
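For context, the standard MBR decision rule (the generic formulation, not necessarily the paper's exact notation) selects, from a sampled hypothesis pool H, the candidate with the highest expected utility against the rest of the pool:

```latex
\hat{y}_{\mathrm{MBR}} \;=\; \operatorname*{arg\,max}_{y \in \mathcal{H}} \; \frac{1}{|\mathcal{H}|} \sum_{y' \in \mathcal{H}} u(y, y')
```

Here u stands for a utility such as the sentence-level or span-level similarity the summary mentions, and the pool H is assumed to be obtained by sampling annotations from the generative ESD model.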
📝 Abstract
Error Span Detection (ESD) is a subtask of automatic machine translation evaluation that localizes error spans in translations and labels their severity. State-of-the-art generative ESD methods typically decode with Maximum a Posteriori (MAP), implicitly assuming that model-estimated probabilities correlate perfectly with similarity to the human annotation. However, we observe that annotations dissimilar to the human annotation can receive a higher model likelihood than the human annotation itself. We address this issue by applying Minimum Bayes Risk (MBR) decoding to generative ESD models. Specifically, we employ sentence- and span-level similarity metrics as utility functions to select candidate hypotheses according to their approximate similarity to the human annotation. Extensive experimental results show that our MBR decoding outperforms the MAP baseline at the system, sentence, and span levels. Furthermore, to mitigate the computational cost of MBR decoding, we show that MBR distillation enables a greedily decoded model to match MBR decoding performance, effectively eliminating the inference-time latency bottleneck.
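As a rough illustration of the selection step only (not the authors' implementation), the sketch below scores each sampled error-span annotation against the rest of the pool with a utility function and keeps the highest-scoring one. The span-level F1 utility and the `(start, end, severity)` representation are assumptions made for the example; the paper's actual sentence- and span-level similarity metrics may differ.

```python
def mbr_select(candidates, utility):
    """Return the candidate with the highest average utility against the
    sampled pool -- the standard MBR selection step."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        # Expected utility is approximated by the mean over the sampled pool.
        score = sum(utility(hyp, other) for other in candidates) / len(candidates)
        if score > best_score:
            best, best_score = hyp, score
    return best


def span_f1(hyp_spans, ref_spans):
    """Hypothetical span-level utility: F1 over (start, end, severity) tuples,
    standing in for the paper's similarity metrics."""
    hyp, ref = set(hyp_spans), set(ref_spans)
    if not hyp and not ref:
        return 1.0
    if not hyp or not ref:
        return 0.0
    overlap = len(hyp & ref)
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


# Usage: the pool would normally be sampled from the generative ESD model.
pool = [
    [(3, 7, "major")],
    [(3, 7, "minor")],
    [(3, 7, "major"), (12, 15, "minor")],
]
print(mbr_select(pool, span_f1))
```

MBR distillation, as described in the abstract, would then fine-tune the model on such MBR-selected outputs so that plain greedy decoding reproduces them without the candidate-sampling overhead.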