Boosting Explainability through Selective Rationalization in Pre-trained Language Models

๐Ÿ“… 2025-01-03
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Pretrained language models (PLMs) often yield suboptimal selective rationalization due to token homogenization, leading to unfaithful and implausible explanations. This work identifies token homogenization as the core cause of rationalization degradationโ€”its first formal characterization in the literature. We propose PLMR, a novel decoupled architecture that separates the PLM into a generator and a predictor: the generator employs gradient-guided pruning to eliminate irrelevant tokens, producing highly readable rationales; the predictor operates on the full input to preserve prediction accuracy. PLMR supports modular training, end-to-end joint optimization, and seamless integration with diverse PLMs. Evaluated on two benchmark datasets, PLMR achieves substantial improvements in rationale faithfulness (+18.7%), sufficiency (+22.3%), and human readability, while maintaining original model performance without degradation.

Technology Category

Application Category

๐Ÿ“ Abstract
The widespread application of pre-trained language models (PLMs) in natural language processing (NLP) has led to increasing concerns about their explainability. Selective rationalization is a self-explanatory framework that selects human-intelligible input subsets as rationales for predictions. Recent studies have shown that applying existing rationalization frameworks to PLMs will result in severe degeneration and failure problems, producing sub-optimal or meaningless rationales. Such failures severely damage trust in rationalization methods and constrain the application of rationalization techniques on PLMs. In this paper, we find that the homogeneity of tokens in the sentences produced by PLMs is the primary contributor to these problems. To address these challenges, we propose a method named Pre-trained Language Model's Rationalization (PLMR), which splits PLMs into a generator and a predictor to deal with NLP tasks while providing interpretable rationales. The generator in PLMR also alleviates homogeneity by pruning irrelevant tokens, while the predictor uses full-text information to standardize predictions. Experiments conducted on two widely used datasets across multiple PLMs demonstrate the effectiveness of the proposed method PLMR in addressing the challenge of applying selective rationalization to PLMs. Codes: https://github.com/ylb777/PLMR.
Problem

Research questions and friction points this paper is trying to address.

Pre-trained Language Models
Transparency
Interpretability Issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained Language Model
Selective Explanation
Enhanced Reliability
๐Ÿ”Ž Similar Papers
L
Libing Yuan
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
S
Shuaibo Hu
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
Kui Yu
Kui Yu
Professor, Hefei University of Technology
Causal discovery and Data mining
Le Wu
Le Wu
Hefei University of Technology
recommender systemsuser modelingexplainabilty and fairness in recommendation