🤖 AI Summary
Low-resource languages like Hebrew lack benchmark datasets and rigorous evaluation protocols for abstractive text summarization. Method: We introduce HeSum, the first high-quality Hebrew news summarization benchmark, comprising 10,000 article-summary pairs sourced from Hebrew news websites and written by professionals. Distinct from prior work, we conduct the first systematic linguistic analysis of how Hebrew's rich morphology, particularly lexical ambiguity and derivational flexibility, affects generative summarization, and design evaluation grounded in linguistic validation that jointly captures abstractness and language-specific properties. Contribution/Results: Experiments reveal substantial performance degradation of state-of-the-art large language models on HeSum, underscoring its difficulty and diagnostic value. HeSum bridges critical gaps in resources and evaluation for low-resource generative NLP, establishing foundational infrastructure to advance multilingual abstractive summarization research.
📝 Abstract
While large language models (LLMs) excel at various natural language tasks in English, their performance in low-resource languages like Hebrew, especially on generative tasks such as abstractive summarization, remains unclear. Hebrew's high morphological richness adds further challenges due to ambiguity in sentence comprehension and complexity in meaning construction. In this paper, we address this evaluation and resource gap by introducing HeSum, a novel benchmark dataset specifically designed for Hebrew abstractive text summarization. HeSum consists of 10,000 article-summary pairs sourced from Hebrew news websites and written by professionals. Linguistic analysis confirms HeSum's high abstractness and unique morphological challenges. We show that HeSum presents distinct difficulties even for state-of-the-art LLMs, establishing it as a valuable testbed for advancing generative language technology in Hebrew, and for the generative challenges of morphologically rich languages (MRLs) in general.