Mind the Gap! Choice Independence in Using Multilingual LLMs for Persuasive Co-Writing Tasks in Different Languages

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates user behavioral biases induced by multilingual large language models (LLMs) in cross-lingual collaborative charity advertising writing. Through a bilingual (English/Spanish) behavioral experiment integrating persuasive effectiveness evaluation, real-world donation measurement, and human-AI ad discrimination testing, we provide the first empirical evidence that users’ performance perceptions of an LLM in one language (e.g., Spanish) transfer to tasks in other languages—violating the axiom of stochastic choice independence. Furthermore, AI-generated labeling significantly reduces donation intent, particularly among Spanish-speaking female participants; and users exhibit poor discriminability between human- and AI-authored advertisements. Our core contribution lies in identifying and characterizing the “cross-lingual trust transfer” phenomenon in multilingual LLM usage, demonstrating its systematic impact on persuasive outcomes and decision-making behavior. These findings offer theoretical grounding and empirical support for designing trustworthy AI-augmented creative systems and guiding culturally adaptive LLM deployment.

📝 Abstract
Recent advances in generative AI have precipitated a proliferation of novel writing assistants. These systems typically rely on multilingual large language models (LLMs), providing globalized workers the ability to revise or create diverse forms of content in different languages. However, there is substantial evidence indicating that the performance of multilingual LLMs varies between languages. Users who employ writing assistance for multiple languages are therefore susceptible to disparate output quality. Importantly, recent research has shown that people tend to generalize algorithmic errors across independent tasks, violating the behavioral axiom of choice independence. In this paper, we analyze whether user utilization of novel writing assistants in a charity advertisement writing task is affected by the AI's performance in a second language. Furthermore, we quantify the extent to which these patterns translate into the persuasiveness of generated charity advertisements, as well as the role of people's beliefs about LLM utilization in their donation choices. Our results provide evidence that writers who engage with an LLM-based writing assistant violate choice independence, as prior exposure to a Spanish LLM reduces subsequent utilization of an English LLM. While these patterns do not affect the aggregate persuasiveness of the generated advertisements, people's beliefs about the source of an advertisement (human versus AI) do. In particular, Spanish-speaking female participants who believed that they read an AI-generated advertisement strongly adjusted their donation behavior downwards. Furthermore, people are generally not able to adequately differentiate between human-generated and LLM-generated ads. Our work has important implications for the design, development, integration, and adoption of multilingual LLMs as assistive agents -- particularly in writing tasks.
Problem

Research questions and friction points this paper is trying to address.

Multilingual LLMs' performance varies across languages.
AI errors experienced in one language carry over to users' willingness to use the AI in another language.
Beliefs about whether content is AI-generated influence donation decisions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual LLMs for writing tasks
Choice independence violation analysis
Persuasiveness impact of AI-generated ads
Shreyan Biswas
Delft University of Technology, Delft, The Netherlands
Alexander Erlei
University of Goettingen, Goettingen, Germany
Ujwal Gadiraju
Associate Professor, Delft University of Technology
Human-centered AI · Human-AI Interaction · Crowd Computing · Human Computation · Information Retrieval