SR-LLM: Rethinking the Structured Representation in Large Language Model

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak structural awareness and poor interpretability of large language models (LLMs) in zero-shot reasoning, this paper proposes a dual-path integration framework that synergizes LLMs with Abstract Meaning Representation (AMR)—a structured semantic formalism. The first path employs a training-free natural-language prompting strategy, converting AMR graphs into coherent textual descriptions to mitigate distributional shift. The second path introduces language-modeling–compatible AMR supervision via fine-tuning, leveraging AMR-derived linguistic signals. This work provides the first empirical evidence that structured semantic priors substantially enhance LLM reasoning capabilities. Performance improves across multiple downstream benchmarks, most notably on PAWS with a +3.17% gain in the zero-shot setting and +12.38% after fine-tuning. Crucially, it bridges semantic structure and language generation without requiring external code parsers, thereby improving both model interpretability and generalization.
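The training-free path hinges on serializing an AMR graph as prose rather than as code-like PENMAN notation before it enters the prompt. A minimal sketch of that idea, where the toy triples, relation glosses, and prompt template are illustrative assumptions rather than the paper's exact serialization:

```python
# Sketch of the training-free path: verbalize AMR triples into plain English
# and prepend the description to a zero-shot prompt, instead of pasting the
# raw graph notation the model rarely saw during pretraining.

def verbalize_amr(triples):
    """Render (source, relation, target) AMR triples as an English description."""
    rel_names = {
        ":instance": "is an instance of",
        ":ARG0": "has agent",
        ":ARG1": "has patient",
    }
    parts = [f"'{src}' {rel_names.get(rel, rel)} '{tgt}'" for src, rel, tgt in triples]
    return "; ".join(parts) + "."

# Toy AMR for "The boy wants to go":
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
triples = [
    ("w", ":instance", "want-01"),
    ("w", ":ARG0", "b"),
    ("b", ":instance", "boy"),
    ("w", ":ARG1", "g"),
    ("g", ":instance", "go-02"),
    ("g", ":ARG0", "b"),
]

description = verbalize_amr(triples)
prompt = (
    "Sentence: The boy wants to go.\n"
    f"Semantic structure (AMR, described in words): {description}\n"
    "Question: Is the boy the one who goes? Answer yes or no."
)
print(prompt)
```

The fine-tuning path would instead train on pairs of such linguistically described structures and target outputs; the verbalization step is the same, only its use differs.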

📝 Abstract
Structured representations, exemplified by Abstract Meaning Representation (AMR), have long been pivotal in computational linguistics. However, their role remains ambiguous in the Large Language Models (LLMs) era. Initial attempts to integrate structured representations into LLMs in a zero-shot setting yielded inferior performance. We hypothesize that this decline stems from the structural information being passed to LLMs in a code format unfamiliar to their training corpora. Consequently, we propose SR-LLM, an innovative framework with two settings to explore a superior way of integrating structured representations with LLMs from training-free and training-dependent perspectives. The former integrates structural information through natural language descriptions in LLM prompts, whereas its counterpart augments the model's inference capability through fine-tuning on linguistically described structured representations. Performance improvements were observed across widely used downstream datasets, with particularly notable gains of 3.17% and 12.38% on PAWS. To the best of our knowledge, this work is the first demonstration that leveraging structured representations can substantially enhance LLMs' inference capability. We hope that our work sheds light on this direction and encourages future research to enhance the reasoning and interpretability of LLMs through structured data.
Problem

Research questions and friction points this paper is trying to address.

Integrate structured representations into LLMs
Enhance LLMs' inference capability
Explore training-free and training-dependent methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates AMR via natural language descriptions
Fine-tunes LLMs on linguistically described structured data
Substantially enhances LLMs' inference capability
👥 Authors
Jiahuan Zhang
Imperial College London
Remote Sensing · Deep Learning · GNSS
Tianheng Wang
Westlake University; KMind Technology Co., Ltd.
Hanqing Wu
KMind Technology Co., Ltd.
Ziyi Huang
Assistant Professor @ Arizona State University
Trustworthy AI for Health
Yulong Wu
University of Toronto
Dongbai Chen
KMind Technology Co., Ltd.
Linfeng Song
Tencent AI Lab
Yue Zhang
Westlake University
Guozheng Rao
Tianjin University
Kaicheng Yu
Assistant Professor, Westlake University; PI of Autonomous Intelligence Lab
Computer Vision · 3D Understanding · Autonomous Perception · Automatic Machine Learning