🤖 AI Summary
While pre-trained language models (PLMs) exhibit strong reasoning capabilities, their comprehension of implicit knowledge—such as commonsense, causality, and analogy—remains opaque, making it difficult to distinguish superficial pattern matching from deep semantic understanding.
Method: We propose KnowProb, a knowledge-guided post-hoc probing framework that introduces a six-dimensional interpretability evaluation, jointly assessing knowledge-aware understanding and relational reasoning. KnowProb integrates implicit knowledge modeling, relational inference analysis, and distributional discrepancy diagnosis to systematically uncover model blind spots.
Contribution/Results: Experiments reveal substantial capability gaps across mainstream PLMs on diverse implicit knowledge tasks. KnowProb accurately identifies these limitations and significantly enhances the fidelity and granularity of semantic understanding assessment. By bridging knowledge representation with interpretability probing, it establishes a novel paradigm for trustworthy AI evaluation.
📝 Abstract
Pre-trained Language Models (PLMs) are trained on large amounts of unlabeled data, yet they exhibit remarkable reasoning skills. However, the trustworthiness challenges posed by these black-box models have become increasingly evident in recent years. To alleviate this problem, this paper proposes a novel knowledge-guided probing approach called KnowProb, framed as post-hoc explanation, which probes whether black-box PLMs understand the implicit knowledge behind a given text rather than only its surface-level content. We provide six potential explanations derived from the underlying content of the given text: three based on knowledge-level understanding and three based on association-level reasoning. In experiments, we validate that current small-scale (and large-scale) PLMs learn only a single representation distribution and still face significant challenges in capturing the hidden knowledge behind a given text. Furthermore, we demonstrate that our approach effectively identifies the limitations of existing black-box models from multiple probing perspectives, helping researchers advance the study of probing black-box models in an explainable way.
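The abstract's core operation, post-hoc probing, can be sketched independently of the paper's specifics: freeze a model's representations and train a lightweight linear probe to predict whether some implicit-knowledge label holds; high probe accuracy suggests the knowledge is linearly encoded in the representations. The sketch below is NOT the authors' KnowProb implementation; it uses synthetic stand-in embeddings and a hypothetical binary "knowledge holds" label purely to illustrate the probing recipe.

```python
import numpy as np

# Hedged sketch of post-hoc probing: the "PLM" embeddings here are synthetic
# stand-ins (a real setup would extract frozen hidden states from a model).
rng = np.random.default_rng(0)
DIM = 16
w_true = rng.normal(size=DIM)  # hidden direction encoding the "knowledge"


def make_frozen_reps(n):
    """Stand-in for frozen PLM embeddings plus a binary knowledge label."""
    X = rng.normal(size=(n, DIM))
    y = (X @ w_true > 0).astype(float)  # hypothetical "knowledge holds" label
    return X, y


def train_linear_probe(X, y, lr=0.5, epochs=300):
    """Logistic-regression probe via gradient descent; the 'PLM' stays frozen."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * float(np.mean(p - y))         # gradient step on bias
    return w, b


def probe_accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == (y > 0.5)))


X_train, y_train = make_frozen_reps(400)
X_test, y_test = make_frozen_reps(200)
w, b = train_linear_probe(X_train, y_train)
acc = probe_accuracy(w, b, X_test, y_test)
print(f"probe accuracy: {acc:.2f}")
```

Because the probe itself is deliberately simple, success or failure is attributable to what the frozen representations encode, which is the diagnostic logic the paper's six probing perspectives build on.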