Logic-Oriented Retriever Enhancement via Contrastive Learning

πŸ“… 2026-02-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limitation of existing retrievers that rely on surface-level similarity when handling complex logical queries, which constrains the performance of large language models in knowledge-intensive tasks. To overcome this, the authors propose LORE, a method that leverages fine-grained contrastive learning to activate the model’s inherent logical reasoning capabilities, steering embedding representations to align with deep logical structures rather than shallow semantic cues. LORE requires no external supervision, additional resources, or pre-retrieval analysis, while preserving index compatibility and computational efficiency. Experimental results demonstrate that LORE significantly improves both retrieval accuracy and downstream generation quality across multiple knowledge-intensive benchmarks. The code and datasets are publicly released.

πŸ“ Abstract
Large language models (LLMs) struggle in knowledge-intensive tasks, as retrievers often overfit to surface similarity and fail on queries involving complex logical relations. The capacity for logical analysis is inherent in model representations but remains underutilized in standard training. LORE (Logic ORiented Retriever Enhancement) introduces fine-grained contrastive learning to activate this latent capacity, guiding embeddings toward evidence aligned with logical structure rather than shallow similarity. LORE requires no external supervision, resources, or pre-retrieval analysis, remains index-compatible, and consistently improves retrieval utility and downstream generation while maintaining efficiency. The datasets and code are publicly available at https://github.com/mazehart/Lore-RAG.
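The abstract's core mechanism, contrastive learning over retriever embeddings, can be illustrated with a generic InfoNCE-style loss: the query embedding is pulled toward a positive passage (here, the logically aligned evidence) and pushed away from hard negatives (surface-similar distractors). This is a minimal NumPy sketch of standard contrastive retrieval training, not LORE's actual fine-grained objective; the function name, dimensions, and temperature are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.05):
    """Generic InfoNCE contrastive loss (illustrative, not LORE's objective).

    Pulls the query embedding toward the positive passage embedding and
    pushes it away from negative passage embeddings, using cosine
    similarity scaled by a temperature.
    """
    def l2_normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    q = l2_normalize(np.asarray(query, dtype=float))
    p = l2_normalize(np.asarray(positive, dtype=float))
    n = l2_normalize(np.asarray(negatives, dtype=float))

    # Similarity of the query to the positive and to each negative.
    logits = np.concatenate(([q @ p], n @ q)) / temperature
    # Cross-entropy with the positive at index 0 (numerically stabilized).
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Toy example: a "logically aligned" passage close to the query, plus
# random distractors standing in for surface-similar negatives.
rng = np.random.default_rng(0)
q = rng.standard_normal(8)
pos = q + 0.1 * rng.standard_normal(8)   # near the query: low loss
negs = rng.standard_normal((4, 8))       # unrelated: pushed away
loss = info_nce_loss(q, pos, negs)
```

In practice this loss would be backpropagated through the retriever's encoder during fine-tuning; the sketch only computes the scalar objective for fixed embeddings.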
Problem

Research questions and friction points this paper is trying to address.

retrieval
logical reasoning
knowledge-intensive tasks
large language models
contrastive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

contrastive learning
logic-oriented retrieval
retriever enhancement
large language models
knowledge-intensive tasks
πŸ”Ž Similar Papers
No similar papers found.
Wenxuan Zhang
Shanghai Institute of Artificial Intelligence for Education, East China Normal University, China
Yuan-Hao Jiang
Shanghai Institute of Artificial Intelligence for Education, East China Normal University, China
Changyong Qi
East China Normal University
Rui Jia
Shanghai Institute of Artificial Intelligence for Education, East China Normal University, China
Yonghe Wu
Education Technology, East China Normal University, China