TAMTRL: Teacher-Aligned Reward Reshaping for Multi-Turn Reinforcement Learning in Long-Context Compression

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of temporal credit assignment in multi-turn memory updating during long-context chunk processing, where the absence of fine-grained supervision hinders effective learning. To overcome this, the authors propose a self-supervised reward reshaping mechanism that leverages relevant documents as teacher signals to align model inputs across turns and assigns normalized probabilistic rewards—eliminating the need for additional annotations or costly external critics. This approach provides fine-grained feedback for memory updates within a multi-turn reinforcement learning framework. Extensive experiments across seven long-context benchmarks demonstrate consistent and significant improvements over strong baselines across models of varying scales, confirming both the effectiveness and generalizability of the proposed method.

📝 Abstract
The rapid progress of large language models (LLMs) has led to remarkable performance gains across a wide range of tasks. However, when handling long documents that exceed the model's context window limit, the entire context cannot be processed in a single pass, making chunk-wise processing necessary: the model takes multiple turns to read different chunks and update a memory. Supervision, though, is typically provided only by the final outcome, which makes it difficult to evaluate the quality of the memory update at each turn in the multi-turn training setting; this introduces a temporal credit assignment challenge. Existing approaches, such as LLM-as-a-judge or process reward models, incur substantial computational overhead and suffer from estimation noise. To better address the credit assignment problem in multi-turn memory training, we propose Teacher-Aligned Reward Reshaping for Multi-Turn Reinforcement Learning (TAMTRL). TAMTRL leverages relevant documents as teacher signals by aligning them with each turn of model input and assigns rewards through normalized probabilities in a self-supervised manner. This provides fine-grained learning signals for each memory update and improves long-context processing. Experiments with multiple models of varying scales across seven long-context benchmarks show that TAMTRL consistently outperforms strong baselines, demonstrating its effectiveness. Our code is available at https://anonymous.4open.science/r/TAMTRL-F1F8.
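The abstract describes distributing fine-grained, per-turn rewards from normalized probabilities rather than relying on the final outcome alone. A minimal sketch of that idea is shown below; the function name and the use of a softmax over per-turn teacher likelihood scores are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of teacher-aligned reward reshaping (illustrative,
# not the authors' implementation). For each turn t, a teacher-derived
# log-likelihood score estimates how well that turn's memory update
# preserves the relevant document; scores are softmax-normalized so the
# final outcome reward is split across turns as fine-grained credit.
import math

def reshape_rewards(teacher_scores, outcome_reward):
    """Distribute outcome_reward over turns in proportion to the
    softmax-normalized per-turn teacher scores."""
    m = max(teacher_scores)
    exps = [math.exp(s - m) for s in teacher_scores]  # numerically stable softmax
    z = sum(exps)
    return [outcome_reward * e / z for e in exps]

# Example: three chunk-reading turns; the second memory update aligns
# best with the teacher document, so it receives the most credit.
rewards = reshape_rewards([-1.2, -0.3, -2.0], outcome_reward=1.0)
print([round(r, 3) for r in rewards])
```

The normalized rewards sum to the original outcome reward, so the reshaping redistributes credit across turns without changing the total return.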
Problem

Research questions and friction points this paper is trying to address.

long-context compression
multi-turn reinforcement learning
temporal credit assignment
memory update
context window limitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward reshaping
multi-turn reinforcement learning
long-context compression
teacher-aligned signals
temporal credit assignment
Li Wang
Beihang University
MARLLLM
Yandong Wang
Citadel Securities
Big Data · NoSQL stores · Machine Learning · High-Frequency Trading Systems
Xin Yu
School of Artificial Intelligence, Beihang University, Beijing, 100191, China
Kui Zhang
School of Artificial Intelligence, Beihang University, Beijing, 100191, China
Tianhao Peng
Beihang University, PhD candidate
Large Language Model · Graph Data Mining
Wenjun Wu
School of Artificial Intelligence, Beihang University, Beijing, 100191, China; Hangzhou International Innovation Institute, Beihang University, Hangzhou, China; Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Beihang University, Beijing, China