AI Summary
Existing research on generic sentences is hindered by the absence of large-scale, diverse corpora of naturally occurring generics. To address this, we introduce MGen, the first large-scale corpus of naturally occurring generics, comprising over 4 million sentences that cover 11 categories of quantificational expressions, all extracted from authentic long-text contexts such as full web pages and academic papers. Methodologically, we propose a hybrid rule- and model-based pipeline for automatic extraction and cleaning that preserves the original discourse context and enables fine-grained annotation of quantifier types. MGen substantially improves lexical, syntactic, and pragmatic diversity as well as ecological validity; linguistic analysis shows that generics are often long sentences and are frequently used to express generalisations about people. Publicly released, MGen is the largest and richest resource of natural generics to date, enabling robust research on genericity identification, language modeling, and the quantification of genericity.
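To make the hybrid pipeline concrete, here is a minimal sketch of what a rule-based first pass over candidate sentences could look like, with quantified sentences flagged for their explicit quantifier and bare sentences passed on as generic candidates for a later model-based filter. The quantifier inventory and the `classify_sentence` helper are illustrative assumptions, not the authors' actual implementation or MGen's actual 11 categories.

```python
import re

# Hypothetical quantifier inventory; the actual 11 categories in MGen may differ.
QUANTIFIERS = {
    "all", "every", "each", "most", "many", "much",
    "some", "several", "few", "no", "generally",
}

_QUANT_RE = re.compile(
    r"^\s*(" + "|".join(sorted(QUANTIFIERS)) + r")\b", re.IGNORECASE
)

def classify_sentence(sentence: str) -> str:
    """Rule-based first pass: tag a sentence as 'quantified' if it opens
    with an explicit quantifier; otherwise tag it 'candidate_generic' so a
    model-based stage can confirm or reject it as a generic."""
    if _QUANT_RE.match(sentence):
        return "quantified"
    return "candidate_generic"

print(classify_sentence("Most birds can fly."))  # quantified
print(classify_sentence("Birds can fly."))       # candidate_generic
```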
Abstract
MGen is a dataset of over 4 million naturally occurring generic and quantified sentences extracted from diverse textual sources. Each sentence comes with a long context document, corresponding to websites and academic papers, and the dataset covers 11 different quantifiers. We analyze the features of generic sentences in the dataset, with interesting insights: generics can be long sentences (averaging over 16 words) and speakers often use them to express generalisations about people.
MGen is the biggest and most diverse dataset of naturally occurring generic sentences, opening the door to large-scale computational research on genericity. It is publicly available at https://gustavocilleruelo.com/mgen
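For readers who want to explore the corpus, the sketch below shows how a downloaded release might be loaded and queried. The file name (`mgen.csv`) and column names (`sentence`, `quantifier`) are assumptions about the release format; check the project page above for the actual schema.

```python
import pandas as pd

# Hypothetical file and column names; see the MGen release page for the
# actual distribution format.
df = pd.read_csv("mgen.csv")

# Average sentence length in words, to compare against the ~16-word
# figure reported in the abstract.
print(df["sentence"].str.split().str.len().mean())

# Restrict to a single quantifier category, e.g. bare generics.
generics = df[df["quantifier"] == "generic"]
print(len(generics))
```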