Mapping Trustworthiness in Large Language Models: A Bibliometric Analysis Bridging Theory to Practice

📅 2025-02-27
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
The conceptual ambiguity of trustworthiness in large language models (LLMs) and the disconnect between theoretical foundations and practical implementation hinder rigorous evaluation and deployment. Method: We conduct a bibliometric analysis and systematic literature review of 2,006 publications (2019–2025), establishing, for the first time, a bidirectional "theory–practice" mapping framework. Contribution/Results: First, we synthesize a tripartite theoretical framework of competence, benevolence, and integrity from 68 core studies. Second, we map these constructs to 20 actionable trust-enhancement techniques spanning the lifecycle of training, inference, and deployment, forming a structured technical taxonomy. Third, by integrating organizational trust theory with LLM engineering practice, we propose an empirically grounded methodology for trustworthiness assessment and enhancement. This work provides a systematic foundation for transparent, accountable, and ethically aligned LLM development and deployment.

📝 Abstract
The rapid proliferation of Large Language Models (LLMs) has raised pressing concerns regarding their trustworthiness, spanning issues of reliability, transparency, fairness, and ethical alignment. Despite the increasing adoption of LLMs across various domains, there remains a lack of consensus on how to operationalize trustworthiness in practice. This study bridges the gap between theoretical discussions and implementation by conducting a bibliometric mapping analysis of 2,006 publications from 2019 to 2025. Through co-authorship networks, keyword co-occurrence analysis, and thematic evolution tracking, we identify key research trends, influential authors, and prevailing definitions of LLM trustworthiness. Additionally, a systematic review of 68 core papers is conducted to examine conceptualizations of trust and their practical implications. Our findings reveal that trustworthiness in LLMs is often framed through existing organizational trust frameworks, emphasizing dimensions such as ability, benevolence, and integrity. However, a significant gap exists in translating these principles into concrete development strategies. To address this, we propose a structured mapping of 20 trust-enhancing techniques across the LLM lifecycle, including retrieval-augmented generation (RAG), explainability techniques, and post-training audits. By synthesizing bibliometric insights with practical strategies, this study contributes towards fostering more transparent, accountable, and ethically aligned LLMs, ensuring their responsible deployment in real-world applications.
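The keyword co-occurrence analysis mentioned above can be illustrated with a minimal sketch: count how often each pair of keywords appears in the same publication, which yields the weighted edges of a co-occurrence network. The paper lists and keywords below are hypothetical placeholders, not data from the study.

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists for a few papers (illustrative only).
papers = [
    ["trustworthiness", "llm", "fairness"],
    ["llm", "transparency", "trustworthiness"],
    ["rag", "llm", "trustworthiness"],
]

# Count how often each keyword pair co-occurs within a single paper.
# Sorting the deduplicated keywords makes each pair a canonical edge key.
cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The strongest edge in the resulting co-occurrence network:
top_pair, top_count = cooccurrence.most_common(1)[0]
print(top_pair, top_count)  # ('llm', 'trustworthiness') 3
```

In a real bibliometric pipeline these edge weights would feed a network tool (e.g. VOSviewer or a graph library) for clustering and thematic mapping; the counting step itself is no more complex than this.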
Problem

Research questions and friction points this paper is trying to address.

Assessing trustworthiness in LLMs across reliability, transparency, and fairness.
Bridging the theory–practice gap in operationalizing LLM trustworthiness metrics.
Proposing trust-enhancing techniques across the LLM development lifecycle.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bibliometric mapping of 2,006 LLM trustworthiness publications (2019–2025)
Systematic review of 68 core papers on trust conceptualizations
Structured mapping of 20 trust-enhancing techniques, including RAG and post-training audits
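Among the listed techniques, retrieval-augmented generation (RAG) grounds model answers in retrieved sources. A minimal sketch of the retrieval and prompt-assembly steps follows; the toy word-overlap retriever, the `documents` store, and the helper names are illustrative assumptions (a real system would use dense embeddings and an LLM call).

```python
# Toy document store (illustrative placeholder content).
documents = {
    "doc1": "Trust frameworks emphasize ability, benevolence, and integrity.",
    "doc2": "Retrieval-augmented generation grounds answers in external sources.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model's answer can cite sources."""
    context = "\n".join(documents[d] for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does retrieval-augmented generation work?"))
```

The resulting prompt would be passed to the LLM; grounding answers in retrieved text is one way the surveyed literature links the "integrity" dimension of trust to a concrete engineering practice.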