🤖 AI Summary
This study investigates how large language models (LLMs) systematically modulate values when rewriting contentious, value-laden arguments—such as those concerning same-sex marriage and Islam—and the associated risk of value homogenization. Method: Through controlled comparative experiments pitting human-authored comments against LLM-rewritten versions, cross-cultural participant evaluations, and quantitative alignment analysis against established value frameworks (e.g., Schwartz’s Theory of Basic Values), we assess shifts in value expression. Contribution/Results: LLMs consistently attenuate conservative values (e.g., tradition, conformity) while amplifying prosocial orientations such as benevolence and universalism. Crucially, opponents of the contested issues rate the original human comments as more credible and representative, whereas supporters prefer the LLM-rewritten versions. These findings demonstrate that LLMs are not value-neutral but actively reshape online discourse through implicit value preferences. This work provides the first empirical evidence of LLM-driven value convergence in contentious domains and elucidates its sociotechnical implications for democratic deliberation and ideological diversity.
📝 Abstract
Large language models (LLMs) are increasingly used to promote prosocial and constructive discourse online. Yet little is known about how they negotiate and shape underlying values when reframing people's arguments on value-laden topics. We conducted experiments with 347 participants from India and the United States, who wrote constructive comments on homophobic and Islamophobic threads, and reviewed human-written and LLM-rewritten versions of these comments. Our analysis shows that LLMs systematically diminish Conservative values while elevating prosocial values such as Benevolence and Universalism. When these comments were read by others, participants opposing same-sex marriage or Islam found human-written comments more aligned with their values, whereas those supportive of these communities found LLM-rewritten versions more aligned with their values. These findings suggest that LLM-driven value homogenization can shape how diverse viewpoints are represented in contentious debates on value-laden topics and may critically influence the dynamics of online discourse.