PLEX: Perturbation-free Local Explanations for LLM-Based Text Classification

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Poor interpretability of large language models (LLMs) in text classification and the prohibitively high computational cost of existing local explanation methods (e.g., LIME, SHAP)—which rely on extensive input perturbations—motivate this work. We propose PLEX, a perturbation-free, efficient local explanation method for LLMs. PLEX leverages contextual embeddings from the target LLM and employs a twin-network architecture trained via feature alignment to learn token-level importance in a single forward pass, eliminating repeated inference. To our knowledge, PLEX is the first method enabling perturbation-free local explanation for LLMs. Evaluated on four text classification tasks, PLEX achieves over 92% explanation agreement with LIME and SHAP, accelerates explanation generation by approximately 100×, reduces computational overhead by roughly 10,000×, and matches or exceeds their keyword identification accuracy.

📝 Abstract
Large Language Models (LLMs) excel in text classification, but their complexity hinders interpretability, making it difficult to understand the reasoning behind their predictions. Explainable AI (XAI) methods like LIME and SHAP offer local explanations by identifying influential words, but they rely on computationally expensive perturbations. These methods typically generate thousands of perturbed sentences and perform inferences on each, incurring a substantial computational burden, especially with LLMs. To address this, we propose Perturbation-free Local Explanation (PLEX), a novel method that leverages the contextual embeddings extracted from the LLM and a "Siamese network" style neural network trained to align with feature importance scores. This one-off training eliminates the need for subsequent perturbations, enabling efficient explanations for any new sentence. We demonstrate PLEX's effectiveness on four different classification tasks (sentiment, fake news, fake COVID-19 news and depression), showing more than 92% agreement with LIME and SHAP. Our evaluation using a "stress test" reveals that PLEX accurately identifies influential words, leading to a similar decline in classification accuracy as observed with LIME and SHAP when these words are removed. Notably, in some cases, PLEX demonstrates superior performance in capturing the impact of key features. PLEX dramatically accelerates explanation, reducing time and computational overhead by two and four orders of magnitude, respectively. This work offers a promising solution for explainable LLM-based text classification.
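The abstract's "stress test" deletes the words an explainer ranks most influential and checks whether the classifier's predictions degrade accordingly. A minimal sketch of that protocol, using a toy stand-in classifier and hypothetical importance scores (the paper's actual models and scores are not reproduced here):

```python
# Hypothetical sketch of the "stress test": delete the words an
# explainer ranks most influential and check whether predictions degrade.

def mask_top_k(tokens, importance, k):
    """Remove the k tokens with the highest importance scores."""
    top = set(sorted(range(len(tokens)),
                     key=lambda i: importance[i], reverse=True)[:k])
    return [t for i, t in enumerate(tokens) if i not in top]

# Toy stand-in for an LLM classifier: predicts "positive" (1) iff it
# sees the word "great"; a real evaluation would query the target LLM.
def toy_classifier(tokens):
    return 1 if "great" in tokens else 0

tokens = ["this", "movie", "was", "great", "fun"]
importance = [0.1, 0.2, 0.0, 0.9, 0.3]  # stand-in explainer scores

before = toy_classifier(tokens)                            # predicts 1
after = toy_classifier(mask_top_k(tokens, importance, 1))  # predicts 0
```

A good explainer concentrates importance on genuinely influential words, so masking its top-ranked tokens should cause a larger accuracy drop than masking random ones.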
Problem

Research questions and friction points this paper is trying to address.

LLM text classification lacks interpretability due to complexity
Existing XAI methods are computationally expensive due to their reliance on perturbations
Need efficient perturbation-free local explanations for LLM predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses contextual embeddings from LLM
Employs Siamese network for alignment
Eliminates need for perturbation sampling
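The idea behind the bullets above can be sketched as follows: a single scoring head, shared across all tokens (the weight sharing behind the "Siamese"/twin design), is trained once to align with precomputed feature-importance targets, after which any new sentence is explained in one forward pass. This is a minimal linear stand-in for the paper's neural network; the embeddings and targets are random placeholders, not real LLM or LIME/SHAP outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for contextual embeddings from the target LLM and for
# precomputed LIME/SHAP importance targets (shapes are hypothetical).
seq_len, embed_dim = 50, 8
embeddings = rng.normal(size=(seq_len, embed_dim))
true_w = rng.normal(size=embed_dim)
targets = embeddings @ true_w  # pretend these came from LIME/SHAP

# One shared scoring head applied identically to every token; here a
# linear stand-in for the paper's network, trained once via MSE
# feature alignment (gradient descent on the alignment loss).
w = np.zeros(embed_dim)
lr = 0.1
for _ in range(1000):
    scores = embeddings @ w                           # scores all tokens at once
    grad = embeddings.T @ (scores - targets) / seq_len
    w -= lr * grad

# Explanation time: one forward pass per sentence, no perturbations.
scores = embeddings @ w
mse = float(np.mean((scores - targets) ** 2))
```

Because the alignment training is a one-off cost, the per-sentence cost collapses from thousands of perturbed inferences (LIME/SHAP) to a single pass over the sentence's embeddings, which is the source of the reported speedups.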
Yogachandran Rahulamathavan
Institute for Digital Technologies, Loughborough University London, London, U.K.
Misbah Farooq
Institute for Digital Technologies, Loughborough University London, London, U.K.
Varuna De Silva
Loughborough University, London