Explaining Black-box Language Models with Knowledge Probing Systems: A Post-hoc Explanation Perspective

📅 2025-08-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
While pre-trained language models (PLMs) exhibit strong reasoning capabilities, their comprehension of implicit knowledge—such as commonsense, causality, and analogy—remains opaque, making it difficult to distinguish superficial pattern matching from deep semantic understanding. Method: We propose KnowProb, a knowledge-guided post-hoc probing framework, introducing the first six-dimensional interpretability evaluation framework that jointly assesses knowledge-aware understanding and relational reasoning. KnowProb integrates implicit knowledge modeling, relational inference analysis, and distributional discrepancy diagnosis to systematically uncover model blind spots. Contribution/Results: Experiments reveal substantial capability gaps across mainstream PLMs on diverse implicit knowledge tasks. KnowProb accurately identifies these limitations and significantly enhances the fidelity and granularity of semantic understanding assessment. By bridging knowledge representation with interpretability probing, it establishes a novel paradigm for trustworthy AI evaluation.

📝 Abstract
Pre-trained Language Models (PLMs) are trained on large amounts of unlabeled data, yet they exhibit remarkable reasoning skills. However, the trustworthiness challenges posed by these black-box models have become increasingly evident in recent years. To alleviate this problem, this paper proposes KnowProb, a novel knowledge-guided probing approach for post-hoc explanation, which probes whether black-box PLMs understand implicit knowledge beyond the given text rather than only its surface-level content. We provide six potential explanations derived from the underlying content of the given text: three based on knowledge-level understanding and three on association-based reasoning. In experiments, we validate that current small-scale and large-scale PLMs learn only a single distribution of representations and still face significant challenges in capturing the hidden knowledge behind a given text. Furthermore, we demonstrate that our proposed approach is effective at identifying the limitations of existing black-box models from multiple probing perspectives, helping researchers advance the explainable analysis of black-box models.
Problem

Research questions and friction points this paper is trying to address.

Explaining black-box language models' internal reasoning processes
Probing implicit knowledge understanding beyond surface text
Identifying limitations in current models' hidden knowledge capture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge-guided probing for post-hoc explanations
Probing implicit knowledge beyond surface text
Multiple perspectives for identifying model limitations
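The probing idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of knowledge-guided post-hoc probing, not the paper's actual KnowProb implementation: a black-box model is queried with the original text and with variants in which the implicit knowledge is stated explicitly, and the agreement between predictions hints at whether the model relied on that knowledge or on surface patterns. All names (`probe_implicit_knowledge`, `toy_model`) are invented for illustration.

```python
# Hypothetical sketch of knowledge-guided post-hoc probing (not the paper's
# actual method): query a black-box model with the original text and with
# variants whose implicit knowledge is made explicit, then compare predictions.
from typing import Callable

def probe_implicit_knowledge(
    model: Callable[[str], str],      # black box: text -> predicted label
    text: str,                        # original input
    knowledge_variants: list[str],    # same input with implicit knowledge spelled out
) -> dict:
    """Return agreement statistics between the prediction on the original
    text and predictions on knowledge-augmented variants (illustrative)."""
    base = model(text)
    variant_preds = [model(v) for v in knowledge_variants]
    agree = sum(p == base for p in variant_preds)
    return {
        "base_prediction": base,
        "agreement_rate": agree / len(variant_preds),
    }

# Toy stand-in for a black-box PLM: a keyword classifier that ignores
# any causal or commonsense cues in the text.
def toy_model(text: str) -> str:
    return "positive" if "great" in text else "negative"

report = probe_implicit_knowledge(
    toy_model,
    "The battery died after one hour.",
    [
        "The battery died after one hour, which implies the product is unreliable.",
        "The battery died after one hour; short battery life is a defect.",
    ],
)
print(report["base_prediction"], report["agreement_rate"])  # prints: negative 1.0
```

In a real probe, the model would be an actual PLM queried through its prediction API, and the variants would be constructed along the six perspectives (knowledge-based understanding and association-based reasoning) described above; systematic disagreement or agreement patterns then expose which kinds of implicit knowledge the model fails to capture.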
👥 Authors
Yunxiao Zhao — School of Computer and Information Technology, Shanxi University, China
Hao Xu — School of Computer and Information Technology, Shanxi University, China
Zhiqiang Wang — School of Computer and Information Technology, Shanxi University, China; Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, China
Xiaoli Li — Institute for Infocomm Research, A*Star, Singapore
Jiye Liang — Shanxi University
Ru Li — Harbin Institute of Technology