🤖 AI Summary
Large language models (LLMs) struggle to reconcile conflicting reviewer opinions in peer review and are prone to cognitive biases—such as anchoring and conformity—that undermine review reliability and interpretability.
Method: This paper proposes the Cognitive Alignment Framework (CAF), grounded in Kahneman’s dual-process theory, which uniquely integrates System 1’s intuitive reasoning with System 2’s logical reflection for meta-review generation. CAF employs a three-stage, dual-path inference process—review initialization, incremental integration, and cognitive alignment—to enable conflict-aware opinion synthesis and bias mitigation.
Contribution/Results: Evaluated across multi-dimensional consistency metrics, CAF achieves up to a 19.47% improvement in sentiment consistency and a 12.95% gain in content consistency, significantly outperforming state-of-the-art baselines. This work establishes a novel paradigm for high-fidelity, interpretable, and cognitively grounded automation of academic peer review.
📝 Abstract
The rapid growth of scholarly submissions has overwhelmed traditional peer review systems, driving the need for intelligent automation that preserves scientific rigor. While large language models (LLMs) show promise in automating manuscript critiques, their ability to synthesize high-stakes meta-reviews, which require conflict-aware reasoning and consensus derivation, remains underdeveloped. Existing methods fail to handle conflicting viewpoints across reviews effectively, and often introduce additional cognitive biases, such as anchoring effects and conformity bias. To overcome these limitations, we propose the Cognitive Alignment Framework (CAF), a dual-process architecture that transforms LLMs into adaptive scientific arbitrators. By operationalizing Kahneman's dual-process theory, CAF introduces a three-step cognitive pipeline: review initialization, incremental integration, and cognitive alignment. Empirical validation shows that CAF outperforms existing LLM-based methods, with sentiment consistency gains of up to 19.47% and content consistency improvements of up to 12.95%.
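The three-stage pipeline named in the abstract can be sketched as an orchestration of LLM calls. Note that this is a minimal illustration, not the paper's implementation: the stage names (review initialization, incremental integration, cognitive alignment) come from the abstract, while the function name `caf_meta_review`, the prompt wording, and the `llm` callable interface are all assumptions made here for the sketch.

```python
def caf_meta_review(reviews, llm):
    """Hypothetical sketch of CAF's three-stage meta-review pipeline.

    `reviews` is a list of reviewer texts; `llm` is any callable that maps
    a prompt string to a completion string (e.g. a wrapper around an API).
    """
    # Stage 1: review initialization -- a fast, System-1-style first draft
    # seeded from the first review alone.
    draft = llm(
        "Draft an initial meta-review from this review:\n" + reviews[0]
    )

    # Stage 2: incremental integration -- fold in each remaining review,
    # asking the model to surface disagreements rather than average them away.
    for review in reviews[1:]:
        draft = llm(
            "Integrate the new review into the draft, explicitly noting any "
            f"conflicting opinions.\nDraft:\n{draft}\nNew review:\n{review}"
        )

    # Stage 3: cognitive alignment -- a deliberate, System-2-style reflection
    # pass to resolve flagged conflicts and to revise judgments that merely
    # anchor on or conform to a single review.
    return llm(
        "Reflect on the draft: resolve the noted conflicts and revise any "
        f"judgment that anchors on or conforms to one review.\n{draft}"
    )
```

Stubbing `llm` with an identity function makes the data flow visible: the final output carries every review plus the conflict-noting instructions, with one model call per stage boundary.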