Explainability of Large Language Models: Opportunities and Challenges toward Generating Trustworthy Explanations

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
The opaque prediction mechanisms of large language models (LLMs) and their propensity for hallucination hinder trustworthy deployment in high-stakes domains such as healthcare and autonomous driving. Method: This work surveys local explainability and mechanistic interpretability in Transformer-based LLMs, spanning attribution analysis, attention visualization, concept activation detection, and reasoning-trace tracking, and examines these techniques through experimental case studies in healthcare and autonomous driving. Contribution/Results: The paper analyzes how such explanations shape the trust of explanation receivers, identifies causal validity, generalizability, and evaluation standardization as core open challenges in current interpretability research, and outlines opportunities, challenges, and future directions toward generating human-aligned, trustworthy LLM explanations, offering a methodological foundation and practical guidance for developing trustworthy LLMs.
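As a concrete illustration of the attribution analysis mentioned above, the sketch below scores each input token by the gradient norm of the predicted next-token logit with respect to that token's embedding. This is a minimal, assumed setup for illustration only (a small Hugging Face "gpt2" model and an invented example sentence), not the paper's implementation.

```python
# Minimal gradient-based attribution sketch (assumed "gpt2" model; illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The patient was prescribed insulin because"  # hypothetical example input
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Embed the tokens explicitly so gradients can flow back to the input representation.
embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits

# Backpropagate the score of the most likely next token to the input embeddings.
logits[0, -1].max().backward()

# The L2 norm of each token's embedding gradient serves as a simple saliency score.
saliency = embeds.grad.norm(dim=-1).squeeze(0)
for tok, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()), saliency.tolist()):
    print(f"{tok:>15s}  {score:.4f}")
```

Gradient-norm saliency is only one of several attribution choices; integrated gradients or perturbation-based methods follow the same pattern of attributing a next-token score back to input tokens.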

📝 Abstract
Large language models have exhibited impressive performance across a broad range of downstream tasks in natural language processing. However, how a language model predicts the next token and generates content is not generally understandable by humans. Furthermore, these models often make errors in prediction and reasoning, known as hallucinations. These errors underscore the urgent need to better understand and interpret the intricate inner workings of language models and how they generate predictive outputs. Motivated by this gap, this paper investigates local explainability and mechanistic interpretability within Transformer-based large language models to foster trust in such models. In this regard, our paper aims to make three key contributions. First, we present a review of local explainability and mechanistic interpretability approaches and insights from relevant studies in the literature. Furthermore, we describe experimental studies on explainability and reasoning with large language models in two critical domains -- healthcare and autonomous driving -- and analyze the trust implications of such explanations for explanation receivers. Finally, we summarize current unaddressed issues in the evolving landscape of LLM explainability and outline the opportunities, critical challenges, and future directions toward generating human-aligned, trustworthy LLM explanations.
Problem

Research questions and friction points this paper is trying to address.

Investigating explainability of Transformer-based large language models
Addressing model errors and hallucinations through interpretability
Generating trustworthy explanations for healthcare and autonomous driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates local explainability in Transformer models
Studies mechanistic interpretability to foster trust (see the attention sketch after this list)
Analyzes trust implications in healthcare and autonomous driving
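The attention visualization referenced in the summary and innovation points can be prototyped in a few lines. The sketch below, again assuming an illustrative "gpt2" model and an invented driving-related prompt rather than the authors' code, extracts the final layer's attention weights and averages over heads to show how strongly the last position attends to each earlier token.

```python
# Minimal attention-inspection sketch (assumed "gpt2" model; illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Brake now: a pedestrian is crossing the road", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer, each [batch, heads, seq, seq].
last_layer = outputs.attentions[-1][0]          # [heads, seq, seq]
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0].tolist())

# Average over heads: attention paid by the final position to every earlier token.
final_pos_attn = last_layer[:, -1, :].mean(dim=0)
for tok, weight in zip(tokens, final_pos_attn.tolist()):
    print(f"{tok:>15s}  {weight:.3f}")
```

Raw attention weights are a starting point for inspection rather than a faithful explanation on their own, which is one reason the paper pairs them with attribution and reasoning-trace analysis.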
🔎 Similar Papers
No similar papers found.
Shahin Atakishiyev
University of Alberta
Housam K. B. Babiker
University of Alberta
Jiayi Dai
University of Alberta
Nawshad Farruque
University of Alberta
Teruaki Hayashi
University of Tokyo
Nafisa Sadaf Hriti
University of Alberta
Md Abed Rahman
University of Alberta
Iain Smith
University of Alberta
Mi-Young Kim
University of Alberta
Osmar R. Zaïane
University of Alberta
Randy Goebel
Professor of Computing Science, University of Alberta
artificial intelligence · logical reasoning · visualization · machine learning · natural language processing