Bridging the Know-Act Gap via Task-Level Autoregressive Reasoning

📅 2026-03-23
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the "knowing-doing gap" in large language models (LLMs): their tendency to recognize flaws in questions yet still produce seemingly plausible but incorrect answers. To bridge this gap, we propose DeIllusionLLM, a novel approach that explicitly models the decision between verification and answer generation through a task-level autoregressive framework, unifying discriminative and generative capabilities within a single backbone via a self-distillation mechanism. We further introduce FaultyScience, the first large-scale, cross-disciplinary benchmark designed to systematically expose the prevalence of this gap. Experimental results demonstrate that DeIllusionLLM significantly reduces the rate of erroneous responses under natural prompting while preserving general reasoning performance, thereby validating the effectiveness and scalability of our paradigm in mitigating the knowing-doing gap.

๐Ÿ“ Abstract
LLMs often generate seemingly valid answers to flawed or ill-posed inputs. This is not due to missing knowledge: under discriminative prompting, the same models can mostly identify such issues, yet fail to reflect this in standard generative responses. This reveals a fundamental know-act gap between discriminative recognition and generative behavior. Prior work largely characterizes this issue in narrow settings, such as math word problems or question answering, with limited focus on how to integrate these two modes. In this work, we present a comprehensive analysis using FaultyScience, a newly constructed large-scale, cross-disciplinary benchmark of faulty scientific questions. We show that the gap is pervasive and stems from token-level autoregression, which entangles task selection (validate vs. answer) with content generation, preventing discriminative knowledge from being utilized. To address this, we propose DeIllusionLLM, a task-level autoregressive framework that explicitly models this decision. Through self-distillation, the model unifies discriminative judgment and generative reasoning within a single backbone. Empirically, DeIllusionLLM substantially reduces answer-despite-error failures under natural prompting while maintaining general reasoning performance, demonstrating that self-distillation is an effective and scalable solution for bridging the discriminative-generative know-act gap.
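The abstract's core idea, generating an explicit task decision (validate vs. answer) before any content generation, can be sketched minimally. This is an illustrative sketch, not the paper's implementation: the names (`TaskDecision`, `decide_task`, `respond`) and the keyword-based discriminative step are stand-ins for what a real system would do with the same LLM backbone under a discriminative prompt.

```python
# Sketch of task-level autoregression: an explicit task token is produced
# first, and only then conditions content generation. All identifiers here
# are hypothetical, not from the paper.
from enum import Enum

class TaskDecision(Enum):
    VALIDATE = "validate"   # flag the input as flawed instead of answering
    ANSWER = "answer"       # proceed with normal answer generation

def decide_task(question: str) -> TaskDecision:
    """Toy discriminative step; a real system would score the input with
    the same backbone model rather than match keywords."""
    flaw_markers = ("square circle", "negative length", "divide by zero")
    if any(marker in question.lower() for marker in flaw_markers):
        return TaskDecision.VALIDATE
    return TaskDecision.ANSWER

def respond(question: str) -> str:
    """The task decision is made before, and gates, content generation,
    so discriminative judgment is not entangled with token-level decoding."""
    if decide_task(question) is TaskDecision.VALIDATE:
        return f"[validate] The question appears flawed: {question!r}"
    return f"[answer] <generated answer to {question!r}>"

print(respond("What is the area of a square circle?"))
print(respond("What is 2 + 2?"))
```

The point of the separation is that refusing to answer a flawed question becomes a first-class output of the task step, rather than something the model must interleave with answer tokens.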
Problem

Research questions and friction points this paper is trying to address.

know-act gap
discriminative-generative gap
faulty inputs
autoregressive reasoning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

know-act gap
task-level autoregression
self-distillation
discriminative-generative alignment
faulty scientific reasoning
Authors
Jihyun Janice Ahn (Penn State)
Ryo Kamoi (PhD Student, Penn State University)
Berk Atil (Penn State)
Renze Lou (Pennsylvania State University)
WonWoo Kang (UIUC)
Heehyun Park (Penn State)
Sarkar Snigdha Sarathi Das (PhD Student, Pennsylvania State University)
Zhuoyang Zou (Penn State)
Xiaoxin Lu (Penn State)
Yusen Zhang (PhD Student, Penn State University)
Asfahan Shah (Penn State)
Ridwanul Hasan Tanvir (Penn State)
Lingxiao Zhao (Mistral AI)
Hongxi Huang (Penn State)
Vignesh Venkatesh (Penn State)
Dianjun Lin (Penn State)
Hamid Shah (Penn State)
Wentao Wang (Penn State)
Zhanpeng Song (Penn State)
Joshua Reed Bassin (Penn State)
Dax Patel (Penn State)
Ishan Appareddy Agrahar (Penn State)
Sahil Pardasani (Penn State)
Xin Dong (Penn State)
Fatemeh Rahbari (Penn State)