🤖 AI Summary
This study investigates how large language models (LLMs) revise gendered role nouns (e.g., *outdoorsperson*, *woman*, *man*) in text and whether their revisions—and accompanying self-explanations—align with feminist and transgender-inclusive language reform principles. It is the first systematic examination of whether LLMs exhibit sociolinguistically grounded contextual sensitivity: i.e., dynamically adapting revision strategies to textual context in accordance with evolving language norms. Using a mixed-methods approach—combining expert annotation, qualitative analysis, and sociolinguistic theoretical frameworks—the study evaluates outputs and self-explanations across multiple LLMs. Results reveal strongly context-dependent revision patterns; LLMs partially adhere to progressive linguistic values, particularly in neutralization tendencies and transgender inclusivity. This work provides the first empirical evidence and interpretable insights into value alignment for LLMs in sociolinguistically sensitive language reform tasks.
📝 Abstract
Within the common LLM use case of text revision, we study LLMs' revision of gendered role nouns (e.g., outdoorsperson/woman/man) and their justifications of such revisions. We evaluate their alignment with feminist and trans-inclusive language reforms for English. Drawing on insight from sociolinguistics, we further assess if LLMs are sensitive to the same contextual effects in the application of such reforms as people are, finding broad evidence of such effects. We discuss implications for value alignment.