🤖 AI Summary
This work addresses the overestimation of model robustness in multilingual intent classification caused by existing benchmarks that rely on synthetic data, which fails to reflect real-world noisy user queries. To bridge this gap, the authors construct a public multilingual hierarchical intent classification benchmark based on authentic logistics customer service logs, covering languages such as English, Spanish, and Arabic. For the first time, this benchmark provides paired native and machine-translated test sets to directly quantify the gap between synthetic and real evaluation, and it includes additional test-only languages that enable zero-shot cross-lingual assessment. High-quality data are curated through a combination of log filtering, large language model–assisted quality control, and human validation. Comprehensive evaluations of multilingual encoders, embedding models, and small language models under both flat and hierarchical settings reveal that machine-translated test sets significantly overestimate performance on native noisy data—particularly for long-tail intents and cross-lingual transfer—highlighting the necessity of this benchmark for realistic assessment.
📝 Abstract
Multilingual intent classification is central to customer-service systems on global logistics platforms, where models must process noisy user queries across languages and hierarchical label spaces. Yet most existing multilingual benchmarks rely on machine-translated text, which is typically cleaner and more standardized than native customer requests and can therefore overestimate real-world robustness. We present a public benchmark for hierarchical multilingual intent classification constructed from real logistics customer-service logs. The dataset contains approximately 30K de-identified, stand-alone user queries curated from 600K historical records through filtering, LLM-assisted quality control, and human verification, and is organized into a two-level taxonomy with 13 parent and 17 leaf intents. English, Spanish, and Arabic are included as seen languages, while Indonesian, Chinese, and additional test-only languages support zero-shot evaluation. To directly measure the gap between synthetic and real evaluation, we provide paired native and machine-translated test sets and benchmark multilingual encoders, embedding models, and small language models under flat and hierarchical protocols. Results show that translated test sets substantially overestimate performance on noisy native queries, especially for long-tail intents and cross-lingual transfer, underscoring the need for more realistic multilingual intent benchmarks.
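The abstract contrasts flat and hierarchical evaluation protocols over the two-level taxonomy (13 parent, 17 leaf intents). A minimal sketch of how such a comparison might be scored is shown below; the label names and (parent, leaf) pair format are illustrative assumptions, not the authors' actual evaluation code.

```python
# Minimal sketch (not the paper's code): flat vs. hierarchical scoring
# for a two-level intent taxonomy. Intent names are hypothetical examples.
from typing import List, Tuple

# A label is a (parent, leaf) pair, e.g. ("delivery", "delivery_delay").
Label = Tuple[str, str]

def flat_accuracy(preds: List[Label], golds: List[Label]) -> float:
    """Flat protocol: only the leaf intent must match."""
    correct = sum(p[1] == g[1] for p, g in zip(preds, golds))
    return correct / len(golds)

def hierarchical_accuracy(preds: List[Label], golds: List[Label]) -> float:
    """Hierarchical protocol: both parent and leaf must match."""
    correct = sum(p == g for p, g in zip(preds, golds))
    return correct / len(golds)

golds = [("delivery", "delivery_delay"), ("refund", "refund_status")]
preds = [("delivery", "delivery_delay"), ("payment", "refund_status")]

print(flat_accuracy(preds, golds))          # 1.0: both leaves match
print(hierarchical_accuracy(preds, golds))  # 0.5: second parent is wrong
```

Under a hierarchical protocol, a correct leaf under the wrong parent counts as an error, which is why the two settings can rank models differently.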