Minimum Bayes Risk Decoding for Error Span Detection in Reference-Free Automatic Machine Translation Evaluation

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In reference-free machine translation evaluation, error span detection (ESD) requires precise localization of translation errors and accurate severity classification. Existing generative approaches predominantly rely on maximum a posteriori (MAP) decoding, yet their likelihood estimates exhibit substantial divergence from human annotations. This work introduces minimum Bayes risk (MBR) decoding to generative ESD for the first time, employing sentence-level and span-level similarity as utility functions to explicitly optimize alignment with human judgments. Furthermore, we propose an MBR distillation strategy that retains model performance while reducing inference latency to greedy-decoding levels. Experiments demonstrate that our method consistently outperforms MAP baselines across system-, sentence-, and span-level evaluations, significantly improving both error localization accuracy and severity classification fidelity, while substantially decreasing computational overhead.

📝 Abstract
Error Span Detection (ESD) is a subtask of automatic machine translation evaluation that localizes error spans in translations and labels their severity. State-of-the-art generative ESD methods typically decode using Maximum a Posteriori (MAP) decoding, assuming that model-estimated probabilities are perfectly correlated with similarity to human annotation. However, we observed that annotations dissimilar to the human annotation can achieve a higher model likelihood than the human annotation itself. We address this issue by applying Minimum Bayes Risk (MBR) decoding to generative ESD models. Specifically, we employ sentence- and span-level similarity metrics as utility functions to select candidate hypotheses based on their approximate similarity to the human annotation. Extensive experimental results show that our MBR decoding outperforms the MAP baseline at the system, sentence, and span levels. Furthermore, to mitigate the computational cost of MBR decoding, we show that MBR distillation enables a standard greedy-decoding model to match MBR decoding performance, effectively eliminating the inference-time latency bottleneck.
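The MBR procedure described in the abstract can be sketched as follows: sample several candidate error annotations from the model, then pick the one with the highest expected utility against the others. This is a minimal illustration, not the paper's implementation; the span-overlap F1 utility and the candidate data below are stand-ins for the paper's sentence- and span-level similarity metrics.

```python
# Minimal sketch of Minimum Bayes Risk (MBR) decoding for error span
# detection, assuming each candidate annotation is a list of
# (start, end, severity) spans sampled from a generative ESD model.
# The utility below is a simple span-overlap F1, a stand-in for the
# similarity metrics the paper uses; the data is illustrative.

def span_f1(hyp, ref):
    """F1 overlap between two sets of (start, end, severity) spans."""
    hyp, ref = set(hyp), set(ref)
    if not hyp and not ref:
        return 1.0  # both predict "no errors": perfect agreement
    tp = len(hyp & ref)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(hyp), tp / len(ref)
    return 2 * prec * rec / (prec + rec)

def mbr_decode(candidates, utility=span_f1):
    """Return the candidate maximizing expected utility against the
    other candidates (a Monte Carlo estimate of minimum Bayes risk)."""
    def expected_utility(h):
        others = [c for c in candidates if c is not h]
        return sum(utility(h, c) for c in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)

# Four sampled annotations for one translation: most samples agree on
# a major error at characters 10-18, so MBR selects that consensus
# even if MAP would have ranked the outlier higher.
candidates = [
    [(0, 5, "minor")],            # outlier sample
    [(10, 18, "major")],
    [(10, 18, "major")],
    [(10, 18, "minor")],          # right span, wrong severity
]
best = mbr_decode(candidates)
print(best)  # → [(10, 18, 'major')]
```

The key difference from MAP is that the selection criterion is agreement under the utility metric rather than model likelihood, which is exactly what the paper exploits when likelihood and human similarity diverge.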
Problem

Research questions and friction points this paper is trying to address.

Improves error localization in machine translation evaluation using risk-aware decoding
Addresses mismatch between model likelihood and human annotation similarity
Reduces computational cost while maintaining detection accuracy across levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

MBR decoding replaces MAP for error detection
Uses similarity metrics to select best hypotheses
MBR distillation preserves MBR-level quality at greedy-decoding cost
Boxuan Lyu
Institute of Science Tokyo

Haiyue Song
National Institute of Information and Communications Technology

Hidetaka Kamigaito
Nara Institute of Science and Technology (NAIST)
Natural Language Processing

Chenchen Ding
National Institute of Information and Communications Technology

Hideki Tanaka
National Institute of Information and Communications Technology

Masao Utiyama
NICT
Machine Translation

Kotaro Funakoshi
Tokyo Institute of Technology
Multimodal Dialogue Systems, Human-Machine Interaction, Computational Linguistics

Manabu Okumura
Institute of Science Tokyo