🤖 AI Summary
Current document-level machine translation (DMT) lacks high-quality, domain-specific evaluation benchmarks—particularly in vertical domains such as law and finance—while existing resources predominantly adopt sentence-level alignment paradigms, failing to capture authentic document-level phenomena like information reorganisation. To address this, we introduce DOLFIN, the first financial-domain DMT benchmark: it uses sections as the fundamental unit of alignment and evaluation, eschews rigid sentence alignment, and enables assessment of context sensitivity and terminology consistency. Constructed from real-world financial documents and rigorously validated through multi-stage human annotation, DOLFIN covers five language pairs, with an average of 1,950 aligned sections per pair. Empirical evaluation demonstrates its effectiveness in discriminating context-aware from context-agnostic models, exposing critical weaknesses in mainstream MT systems, including terminology inconsistency and logical misalignment. DOLFIN is publicly released.
📝 Abstract
Despite the strong research interest in document-level Machine Translation (MT), the test sets dedicated to this task are still scarce. The existing test sets mainly cover topics from the general domain and fall short in specialised domains such as legal and financial. Moreover, despite their document-level framing, they still follow a sentence-level logic, which prevents them from capturing linguistic phenomena such as information reorganisation. In this work, we aim to fill this gap by proposing a novel test set: DOLFIN. The dataset is built from specialised financial documents, and it takes a step towards true document-level MT by abandoning the paradigm of perfectly aligned sentences and presenting data in units of sections rather than sentences. The test set consists of an average of 1,950 aligned sections for five language pairs. We present a detailed data collection pipeline that can serve as inspiration for aligning new document-level datasets. We demonstrate the usefulness and quality of this test set by evaluating a number of models. Our results show that the test set is able to discriminate between context-sensitive and context-agnostic models, and it exposes the weaknesses of models that fail to accurately translate financial texts. The test set is made public for the community.
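To make the section-level alignment idea concrete, here is a minimal sketch of what one aligned test-set record might look like as a JSON-lines entry. The field names and values are purely illustrative assumptions, not DOLFIN's actual schema; the point is that the source and target are whole sections, so sentence counts and ordering may differ between the two sides.

```python
import json

# Hypothetical record for one aligned section (illustrative fields only).
record = {
    "doc_id": "fund_prospectus_001",   # assumed document identifier
    "section_id": 3,                   # position of the section in the document
    "lang_pair": "en-fr",
    "source": "The Fund may invest in derivative instruments. Risks apply.",
    "target": "Le Fonds peut investir dans des instruments dérivés, "
              "ce qui comporte des risques.",  # 2 sentences mapped to 1
}

# Serialise as one JSON-lines entry and read it back.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
print(parsed["lang_pair"])  # → en-fr
```

Because the unit is the section, a single record can legitimately contain reorganised information, merged or split sentences, and cross-sentence terminology that sentence-aligned formats cannot represent.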