🤖 AI Summary
Existing post-hoc explanation methods for black-box models face a trade-off between faithfulness and human interpretability; B-cos networks improve intrinsic interpretability, but prior work has been confined to computer vision. This paper presents the first extension of the B-cos architecture to natural language processing: a plug-and-play structural conversion of pretrained language models that avoids training from scratch, combined with task-adaptive fine-tuning to preserve performance while gaining intrinsic interpretability. The method replaces standard linear transformations with cosine-similarity-scaled ones and maps pretrained weights into the new architecture, revealing learning dynamics and attribution patterns that differ fundamentally from conventional fine-tuning. Experiments across multiple NLP benchmarks demonstrate significantly higher explanation faithfulness than mainstream post-hoc methods (e.g., Integrated Gradients), a 32% improvement in human evaluation scores, and task accuracy on par with standard fine-tuning. The code is publicly available.
📝 Abstract
Post-hoc explanation methods for black-box models often struggle with faithfulness and human interpretability due to the lack of explainability in current neural models. Meanwhile, B-cos networks have been introduced to improve model explainability through architectural and computational adaptations, but their application has so far been limited to computer vision models and their associated training pipelines. In this work, we introduce B-cos LMs, i.e., B-cos networks empowered for NLP tasks. Our approach directly transforms pre-trained language models into B-cos LMs by combining B-cos conversion and task fine-tuning, improving efficiency compared to previous B-cos methods. Our automatic and human evaluation results demonstrate that B-cos LMs produce more faithful and human-interpretable explanations than post-hoc methods, while maintaining task performance comparable to conventional fine-tuning. Our in-depth analysis explores how B-cos LMs differ from conventionally fine-tuned models in their learning processes and explanation patterns. Finally, we provide practical guidelines for effectively building B-cos LMs based on our findings. Our code is available at https://anonymous.4open.science/r/bcos_lm.
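To make the "cosine-similarity-based forward propagation" concrete, here is a minimal NumPy sketch of a B-cos linear transform in the style of the original B-cos networks (Böhle et al.): each unit's linear response is scaled by the cosine alignment between input and weight, raised to the power B−1. This is an illustrative sketch, not the authors' implementation; the function name `bcos_linear` and its parameters are assumptions.

```python
import numpy as np

def bcos_linear(x, W, B=2.0, eps=1e-8):
    """Illustrative B-cos transform (sketch, not the paper's code).

    Scales each unit's linear response by |cos(x, w)|^(B-1), so a unit
    only fires strongly when the input aligns with its weight vector.
    B=1 recovers a plain linear layer with unit-norm weights.
    """
    # Normalize weight rows so the dot product equals ||x|| * cos(x, w).
    W_hat = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    lin = x @ W_hat.T                        # standard linear response
    cos = lin / (np.linalg.norm(x) + eps)    # cosine similarity per unit
    return np.abs(cos) ** (B - 1) * lin      # alignment-scaled output
```

Because the scaling factor lies in [0, 1], increasing B suppresses outputs of poorly aligned units, which is what makes the resulting weight-input alignments usable as explanations.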