BiMind: A Dual-Head Reasoning Model with Attention-Geometry Adapter for Incorrect Information Detection

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the difficulty that existing misinformation detection methods face in performing content verification and knowledge correction simultaneously under attention geometry collapse. To overcome this limitation, the authors propose BiMind, a dual-head reasoning framework that decouples intra-content reasoning from knowledge-augmented reasoning. The framework mitigates geometric collapse via an attention geometry adapter, constructs domain-specific semantic memory through a self-retrieval kNN mechanism, and dynamically integrates information from both reasoning paths using an uncertainty-aware fusion strategy that combines entropy gating with symmetric KL regularization. Furthermore, a novel Value-of-eXperience metric is introduced to quantify the contribution of external knowledge, enhancing both detection performance and model interpretability. Experiments demonstrate that BiMind outperforms state-of-the-art models across multiple public benchmarks and offers insight into when and how knowledge intervention matters.
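The summary's uncertainty-aware fusion can be pictured with a short sketch. The details below (how the gate is computed from entropies, and a margin-free symmetric KL term) are assumptions for illustration; the paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def entropy_gated_fusion(logits_content: torch.Tensor,
                         logits_knowledge: torch.Tensor) -> torch.Tensor:
    """Weight each head by its confidence: the head whose prediction has
    lower entropy contributes more to the fused logits. (Illustrative
    assumption; the paper also uses a trainable agreement head.)"""
    entropies = []
    for logits in (logits_content, logits_knowledge):
        p = F.softmax(logits, dim=-1)
        entropies.append(-(p * p.clamp_min(1e-9).log()).sum(-1, keepdim=True))
    # Lower entropy -> larger weight (softmax over negative entropies).
    w = torch.softmax(torch.cat([-entropies[0], -entropies[1]], dim=-1), dim=-1)
    return w[..., :1] * logits_content + w[..., 1:] * logits_knowledge

def symmetric_kl(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL agreement regularizer: 0.5 * (KL(a||b) + KL(b||a))."""
    log_pa = F.log_softmax(logits_a, dim=-1)
    log_pb = F.log_softmax(logits_b, dim=-1)
    kl_ab = F.kl_div(log_pb, log_pa, reduction="batchmean", log_target=True)
    kl_ba = F.kl_div(log_pa, log_pb, reduction="batchmean", log_target=True)
    return 0.5 * (kl_ab + kl_ba)
```

The regularizer is zero when the two heads agree exactly, which encourages the content and knowledge paths to stay mutually consistent rather than drifting apart during training.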
📝 Abstract
Incorrect information poses significant challenges by undermining content veracity and integrity, yet most detection approaches struggle to jointly balance textual content verification with external knowledge correction under collapsed attention geometries. To address this issue, we propose a dual-head reasoning framework, BiMind, which disentangles content-internal reasoning from knowledge-augmented reasoning. BiMind introduces three core innovations: (i) an attention geometry adapter that reshapes attention logits via token-conditioned offsets and mitigates attention collapse; (ii) a self-retrieval knowledge mechanism that constructs an in-domain semantic memory through kNN retrieval and injects retrieved neighbors via feature-wise linear modulation; (iii) uncertainty-aware fusion strategies, including entropy-gated fusion and a trainable agreement head, stabilized by a symmetric Kullback-Leibler agreement regularizer. To quantify knowledge contributions, we define a novel metric, Value-of-eXperience (VoX), which measures instance-wise logit gains from knowledge-augmented reasoning. Experimental results on public datasets demonstrate that BiMind outperforms advanced detection approaches and provides interpretable diagnostics on when and why knowledge matters.
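Component (i), the attention geometry adapter, can be sketched minimally: a per-token, per-head offset is added to the raw attention logits before the softmax, nudging attention mass away from collapsing onto a handful of tokens. The module name, projection, and tensor shapes below are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class AttentionGeometryAdapter(nn.Module):
    """Sketch of a token-conditioned offset on attention logits.
    Assumed shapes: attn_logits (B, H, Lq, Lk), key_hidden (B, Lk, d_model)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        # One scalar offset per head, conditioned on each key token's state.
        self.offset_proj = nn.Linear(d_model, n_heads)

    def forward(self, attn_logits: torch.Tensor,
                key_hidden: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_proj(key_hidden)           # (B, Lk, H)
        offsets = offsets.permute(0, 2, 1).unsqueeze(2)  # (B, H, 1, Lk)
        # Reshaped logits; the softmax over the key dimension happens downstream.
        return attn_logits + offsets
```

Because the offsets depend on token content rather than position alone, the adapter can selectively re-spread attention where the original geometry has degenerated.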
Problem

Research questions and friction points this paper is trying to address.

incorrect information detection
attention collapse
knowledge-augmented reasoning
content verification
attention geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention geometry adapter
self-retrieval knowledge mechanism
uncertainty-aware fusion
Value-of-eXperience
dual-head reasoning
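The Value-of-eXperience idea listed above can be illustrated with a small sketch. The margin-based definition here is an assumption (the abstract says only "instance-wise logit gains"); positive VoX would indicate that knowledge-augmented reasoning helped that instance:

```python
import torch

def value_of_experience(logits_content: torch.Tensor,
                        logits_fused: torch.Tensor,
                        labels: torch.Tensor) -> torch.Tensor:
    """Per-instance gain in the true-class logit margin once knowledge-
    augmented reasoning is fused in (illustrative definition)."""
    def margin(logits: torch.Tensor) -> torch.Tensor:
        # True-class logit minus the best competing logit.
        true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
        rest = logits.scatter(1, labels.unsqueeze(1), float("-inf"))
        return true - rest.max(dim=1).values
    return margin(logits_fused) - margin(logits_content)
```

Aggregating this quantity over a dataset would show on which kinds of instances external knowledge is most valuable, matching the paper's goal of diagnosing when and why knowledge matters.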