Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
The widespread deployment of large language models (LLMs) in high-stakes domains such as politics has triggered a trust crisis; existing trust models rely on unidirectional, oversimplified assumptions and fail to capture the bidirectional interplay between trustor (user) contextual factors (e.g., ideological orientation, prior experience) and trustee (system) attributes (e.g., transparency, human-AI collaboration). Method: the authors propose a "bowtie" trust model for LLMs, a structured socio-technical framework grounded in mixed methods, political discourse analysis, and controlled ChatGPT experiments. Contribution/Results: the model explicates trust's nested socio-technical nature and bidirectional dynamics. Empirically, users' prior experience and familiarity emerge as the primary trust drivers; human-AI collaboration significantly enhances trust, while lack of transparency erodes it. The work establishes a theoretical foundation and empirical basis for trustworthy LLM governance.

📝 Abstract
The rapid and unprecedented dominance of Artificial Intelligence (AI), particularly through Large Language Models (LLMs), has raised critical trust challenges in high-stakes domains like politics. Biased LLM decisions and misinformation undermine democratic processes, and existing trust models fail to address the intricacies of trust in LLMs. Current oversimplified, one-directional approaches have largely overlooked the many relationships between trustor (user) contextual factors (e.g. ideology, perceptions) and trustee (LLM) systemic elements (e.g. scientists, tool features). In this work, we introduce a bowtie model for holistically conceptualizing and formulating trust in LLMs, with a core component that comprehensively explores trust by tying its two sides, namely the trustor and the trustee, as well as their intricate relationships. We uncover these relationships within the proposed bowtie model, and beyond it to its sociotechnical ecosystem, through a mixed-methods explanatory study that exploits a political discourse analysis tool (integrating ChatGPT), exploring and responding to the following critical questions: 1) How do the trustor's contextual factors influence trust-related actions? 2) How do these factors influence and interact with trustee systemic elements? 3) How does trust itself vary across trustee systemic elements? Our bowtie-based explanatory analysis reveals that past experiences and familiarity significantly shape the trustor's trust-related actions; not all trustor contextual factors equally influence trustee systemic elements; and the trustee's human-in-the-loop features enhance trust, while lack of transparency decreases it. Finally, this evidence is exploited to deliver recommendations, insights, and pathways towards building robust trusting ecosystems in LLM-based solutions.
Problem

Research questions and friction points this paper is trying to address.

Examines trust challenges posed by LLMs in high-stakes domains such as politics
Addresses oversimplified trust models that ignore trustor-trustee relationships
Proposes a bowtie model to analyze trustor and trustee interactions holistically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bowtie model capturing trustor-trustee relationships in LLMs
Mixed-methods explanatory study built on political discourse analysis with ChatGPT
Empirical evidence that human-in-the-loop features enhance trust in LLMs