Uniform Information Density and Syntactic Reduction: Revisiting $\textit{that}$-Mentioning in English Complement Clauses

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether omission of the optional complementizer "that" in English complement clauses is regulated by information density, specifically whether "that" is more likely to be omitted when the embedded clause is low-density (i.e., more predictable). Method: Grounded in the Uniform Information Density hypothesis, the study combines large-scale contemporary spoken corpora (e.g., COCA, Switchboard) with neural language models (BERT), using context-sensitive word embeddings to estimate clause-level information density and thereby avoid the lexical-specificity biases of traditional subcategorization-probability measures. Contribution/Results: Machine-learning models and statistical analysis replicate the inverse relationship between information density and "that" omission. Crucially, the contextual-embedding density measures significantly improve explanatory power for complementizer variation (ΔR² > 0.12), offering more generalizable and fine-grained empirical support for information-theoretic accounts of syntactic reduction.
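
As a rough illustration of the contextual density measure described above, the sketch below scores an embedded clause by its mean per-token surprisal under a pretrained language model. This is a minimal sketch, not the authors' code: the model choice (GPT-2 via Hugging Face transformers), the helper function, and the example sentence are assumptions for illustration; the paper's BERT-based estimates would require a masked-language-model variant of this computation.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# information density of an embedded complement clause = mean per-token
# surprisal given the preceding matrix clause, under a causal LM.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def clause_information_density(context: str, clause: str) -> float:
    """Mean surprisal (bits per token) of `clause` given the preceding `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + clause, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given all preceding tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.size(0)), targets]
    # Keep only the tokens belonging to the embedded clause.
    clause_lp = token_lp[ctx_ids.size(1) - 1 :]
    surprisal_bits = -clause_lp / math.log(2.0)
    return surprisal_bits.mean().item()

# e.g., matrix clause without "that", followed by the embedded clause
print(clause_information_density("I think", "she already left"))
```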

📝 Abstract
Speakers often have multiple ways to express the same meaning. The Uniform Information Density (UID) hypothesis suggests that speakers exploit this variability to maintain a consistent rate of information transmission during language production. Building on prior work linking UID to syntactic reduction, we revisit the finding that the optional complementizer $\textit{that}$ in English complement clauses is more likely to be omitted when the clause has low information density (i.e., more predictable). We advance this line of research by analyzing a large-scale, contemporary conversational corpus and using machine learning and neural language models to refine estimates of information density. Our results replicated the established relationship between information density and $\textit{that}$-mentioning. However, we found that previous measures of information density based on matrix verbs' subcategorization probability capture substantial idiosyncratic lexical variation. By contrast, estimates derived from contextual word embeddings account for additional variance in patterns of complementizer usage.
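
For contrast with the embedding-based measure, the sketch below shows the traditional subcategorization-probability estimate the abstract refers to: how often a matrix verb takes a finite complement clause at all. The verb counts are invented for illustration; in the study such probabilities would be estimated from corpus frequencies.

```python
# Minimal sketch (hypothetical counts, not from the paper): the traditional
# density proxy is P(finite complement clause | matrix verb). A lower
# probability means the clause is less expected, hence higher information density.
verb_counts = {
    # verb: (count with a complement clause, total count) -- illustrative numbers only
    "think":   (8200, 10000),
    "know":    (4500, 10000),
    "confirm": (300, 10000),
}

def subcat_probability(verb: str) -> float:
    """Relative-frequency estimate of P(complement clause | matrix verb)."""
    cc_count, total = verb_counts[verb]
    return cc_count / total

for verb in verb_counts:
    print(verb, round(subcat_probability(verb), 3))
```
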
Problem

Research questions and friction points this paper is trying to address.

Analyzing optional complementizer omission in English complement clauses
Testing Uniform Information Density hypothesis in language production
Refining information density estimates using neural language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using machine learning for information density estimation
Analyzing large-scale conversational corpus data
Employing contextual word embeddings for variance analysis (see the model-comparison sketch after this list)
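
The variance analysis in the last item can be pictured as a nested-model comparison: does adding the embedding-based density measure to a lexical baseline increase the variance explained in "that"-mentioning? The sketch below is an assumed setup using synthetic data and McFadden's pseudo-R² as a stand-in for the paper's ΔR² figure; none of the numbers or predictors come from the study.

```python
# Minimal sketch (synthetic data, not the paper's models): compare a lexical
# baseline with a model that adds a contextual-embedding density measure,
# using McFadden's pseudo-R^2 from nested logistic regressions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
subcat_prob = rng.uniform(0.1, 0.9, n)      # verb-level subcategorization probability
embed_density = rng.normal(0.0, 1.0, n)     # embedding-based density (standardized)

# Hypothetical generative story: higher density -> "that" more likely mentioned.
logit = -0.5 + 1.2 * embed_density - 0.8 * subcat_prob
that_mentioned = rng.binomial(1, 1 / (1 + np.exp(-logit)))

baseline = sm.Logit(that_mentioned, sm.add_constant(subcat_prob)).fit(disp=0)
full = sm.Logit(
    that_mentioned,
    sm.add_constant(np.column_stack([subcat_prob, embed_density])),
).fit(disp=0)

delta_r2 = full.prsquared - baseline.prsquared
print(f"baseline pseudo-R2 = {baseline.prsquared:.3f}, "
      f"full = {full.prsquared:.3f}, delta = {delta_r2:.3f}")
```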