Towards Transparent AI: A Survey on Explainable Language Models

📅 2025-09-25
🤖 AI Summary
The black-box nature of language models (LMs) severely hinders their trustworthy deployment in high-stakes applications. Existing eXplainable AI (XAI) methods struggle to accommodate LM-specific architectural complexity and generalization capabilities, and no systematic survey aligns XAI techniques with the architectural evolution of LMs. This work introduces the first taxonomy of XAI methods categorized by Transformer architecture type: encoder-only, decoder-only, and encoder-decoder. We propose a dual-dimensional evaluation framework that jointly assesses *reasoning plausibility* and *explanation faithfulness*. Through a comprehensive literature review and comparative analysis, we systematically evaluate the applicability and limitations of attention visualization, feature attribution, and reasoning path decomposition techniques. Our study establishes a structured XAI classification scheme, uncovers critical impacts of architectural differences on explanation quality, and identifies key research directions toward trustworthy, interpretable LMs.
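To make the surveyed technique families concrete, here is a minimal sketch of attention visualization, the first category named above: a single-head self-attention computed in NumPy, from which the attention matrix (the object such methods render as a heatmap) can be read off. All names, shapes, and values here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(X, Wq, Wk):
    """Scaled dot-product attention weights for one head.
    X: (seq_len, d_model) token embeddings; returns (seq_len, seq_len)."""
    Q, K = X @ Wq, X @ Wk
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
tokens = ["the", "model", "is", "opaque"]   # toy sentence
X = rng.normal(size=(4, 8))                  # toy embeddings, d_model = 8
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))

A = attention_weights(X, Wq, Wk)
# Each row is a probability distribution over source tokens --
# exactly what an attention heatmap visualizes for one head/layer.
print(np.round(A, 2))
```

Attention-visualization tools for real LMs extract these per-layer, per-head matrices from the trained model rather than from random weights; the survey's point is that what these matrices mean differs across encoder-only, decoder-only, and encoder-decoder architectures (e.g., causal masking in decoders).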

📝 Abstract
Language Models (LMs) have significantly advanced natural language processing and enabled remarkable progress across diverse domains, yet their black-box nature raises critical concerns about the interpretability of their internal mechanisms and decision-making processes. This lack of transparency is particularly problematic for adoption in high-stakes domains, where stakeholders need to understand the rationale behind model outputs to ensure accountability. Meanwhile, although explainable artificial intelligence (XAI) methods have been well studied for non-LMs, they face many limitations when applied to LMs due to their complex architectures, considerable training corpora, and broad generalization abilities. Although various surveys have examined XAI in the context of LMs, they often fail to capture the distinct challenges arising from the architectural diversity and evolving capabilities of these models. To bridge this gap, this survey presents a comprehensive review of XAI techniques with a particular emphasis on LMs, organizing them according to their underlying Transformer architectures (encoder-only, decoder-only, and encoder-decoder) and analyzing how methods are adapted to each, while assessing their respective strengths and limitations. Furthermore, we evaluate these techniques through the dual lenses of plausibility and faithfulness, offering a structured perspective on their effectiveness. Finally, we identify open research challenges and outline promising future directions, aiming to guide ongoing efforts toward the development of robust, transparent, and interpretable XAI methods for LMs.
Problem

Research questions and friction points this paper is trying to address.

Addressing interpretability challenges in black-box language models
Surveying explainable AI methods for diverse transformer architectures
Evaluating XAI techniques through plausibility and faithfulness metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveying explainable AI techniques for language models
Organizing methods by transformer architecture types
Evaluating techniques via plausibility and faithfulness metrics
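The faithfulness axis of the evaluation framework is often operationalized as a perturbation probe: remove the features an explanation rates highest and measure how much the prediction drops (a larger drop means the explanation tracks what the model actually uses). The sketch below illustrates this idea, in the spirit of comprehensiveness-style metrics, on an invented toy linear model; it is an assumption-laden illustration, not the survey's specific metric.

```python
import numpy as np

def predict_proba(x, w):
    # Toy linear "model": sigmoid over a dot product.
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def comprehensiveness(x, w, attributions, k=2):
    """Faithfulness probe: zero out the k features the explanation rates
    highest and measure how much the prediction falls.
    Bigger drop = explanation is more faithful to the model."""
    top = np.argsort(attributions)[-k:]
    x_masked = x.copy()
    x_masked[top] = 0.0
    return predict_proba(x, w) - predict_proba(x_masked, w)

w = np.array([2.0, -1.0, 0.5, 3.0])   # toy model weights
x = np.array([1.0, 1.0, 1.0, 1.0])    # toy input
attr = x * w                          # input-x-gradient attribution (exact for a linear model)

drop = comprehensiveness(x, w, attr, k=2)
print(round(float(drop), 3))
```

Plausibility, the framework's other axis, cannot be probed this way: it asks whether the explanation is convincing to a human, which requires human-annotated rationales rather than model perturbations, which is why the survey treats the two dimensions jointly.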
Avash Palikhe
Florida International University
Algorithmic Fairness, Explainability, Ethical AI

Zichong Wang
Florida International University
Trustworthy ML, Causal Inference, Graph Mining, Algorithmic Fairness

Zhipeng Yin
Florida International University
Trustworthy AI, Algorithmic Fairness, Copyright, Machine Learning

Rui Guo
University of Florida, Gainesville, United States

Qiang Duan
Pennsylvania State University, Montgomery County, United States

Jie Yang
University of Wollongong, Wollongong, Australia

Wenbin Zhang
Florida International University, Miami, United States