Refine Knowledge of Large Language Models via Adaptive Contrastive Learning

📅 2025-02-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address hallucination in large language models (LLMs), this paper proposes a knowledge refinement paradigm that emulates human cognitive reflection. It dynamically constructs positive and negative contrastive samples based on the model's internal knowledge state, thereby reinforcing correct knowledge, deepening ambiguous knowledge, forgetting erroneous knowledge, and enabling honest refusal to answer unknown queries. The authors introduce the first knowledge-state-aware adaptive contrastive learning mechanism, integrating implicit knowledge-representation alignment with activation-based analysis of LLM internals to guide sample construction. Evaluated on TruthfulQA, FactScore, and Self-Knowledge benchmarks, the method significantly improves factual consistency and knowledge honesty, reducing the average hallucination rate by 32.7% without relying on external knowledge sources or human annotations.

๐Ÿ“ Abstract
Alleviating the hallucinations of Large Language Models (LLMs) has long been a fundamental goal of the LLM research community. Across the many hallucination-related studies, a mainstream category of methods reduces hallucinations by optimizing the knowledge representation of LLMs to change their output. Since the core focus of these works is the knowledge acquired by models, and knowledge has long been a central theme of human societal progress, we believe the process by which models refine knowledge can benefit greatly from the way humans learn. In our work, by imitating the human learning process, we design an Adaptive Contrastive Learning strategy. Our method flexibly constructs different positive and negative samples for contrastive learning based on LLMs' actual mastery of knowledge. This strategy helps LLMs consolidate the correct knowledge they already possess, deepen their understanding of the correct knowledge they have encountered but not fully grasped, forget the incorrect knowledge they previously learned, and honestly acknowledge the knowledge they lack. Extensive experiments and detailed analyses on widely used datasets demonstrate the effectiveness of our method.
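The four-way treatment of knowledge described above (consolidate, deepen, forget, acknowledge) can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the state names, the confidence threshold, and the use of answer correctness plus confidence as a proxy for "actual mastery of knowledge" are all assumptions.

```python
# Hypothetical sketch of adaptive contrastive sample construction.
# The paper judges an LLM's mastery of a fact and builds positive/negative
# samples accordingly; here we approximate that state with two illustrative
# signals: whether the model's answer was correct, and its confidence.

def knowledge_state(is_correct: bool, confidence: float, threshold: float = 0.5) -> str:
    """Classify the model's mastery of a piece of knowledge (illustrative)."""
    if is_correct and confidence >= threshold:
        return "mastered"    # consolidate: knowledge already possessed
    if is_correct:
        return "ambiguous"   # deepen: correct but not fully grasped
    if confidence >= threshold:
        return "misled"      # forget: confidently wrong
    return "unknown"         # acknowledge: knowledge the model lacks

def build_contrastive_samples(gold_answer: str, model_answer: str, state: str) -> dict:
    """Pick positive/negative targets for contrastive learning per state."""
    refusal = "I don't know."
    if state in ("mastered", "ambiguous"):
        # Reinforce the correct answer; discourage unnecessary refusal.
        return {"positive": gold_answer, "negative": refusal}
    if state == "misled":
        # Push toward the gold answer and away from the learned error.
        return {"positive": gold_answer, "negative": model_answer}
    # Unknown: reward honest refusal over a fabricated answer.
    return {"positive": refusal, "negative": model_answer}
```

Under this sketch, a confidently wrong answer yields a negative sample built from the model's own output, which is what lets contrastive training "forget" incorrect knowledge rather than merely reinforce the gold label.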
Problem

Research questions and friction points this paper is trying to address.

Reducing hallucinations in Large Language Models
Optimizing knowledge representation in LLMs
Adaptive Contrastive Learning for LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Contrastive Learning strategy
Flexible positive and negative samples
Refining LLMs' knowledge representation
Yinghui Li
Shenzhen International Graduate School, Tsinghua University
Haojing Huang
Tsinghua University
Natural Language Processing · Large Language Model
Jiayi Kuang
Sun Yat-sen University
Yangning Li
Shenzhen International Graduate School, Tsinghua University, Peng Cheng Laboratory
Shu-Yu Guo
Shenzhen International Graduate School, Tsinghua University
Chao Qu
INFLY TECH (Shanghai) Co., Ltd.
Xiaoyu Tan
INFLY TECH (Shanghai) Co., Ltd.
Hai-Tao Zheng
Shenzhen International Graduate School, Tsinghua University, Peng Cheng Laboratory
Ying Shen
Sun Yat-sen University
Philip S. Yu
Professor of Computer Science, University of Illinois at Chicago
Data Mining · Database · Privacy