🤖 AI Summary
This work addresses the challenge of temporal credit assignment in multi-turn memory updating during long-context chunk processing, where the absence of fine-grained supervision hinders effective learning. To overcome this, the authors propose a self-supervised reward reshaping mechanism that leverages relevant documents as teacher signals to align model inputs across turns and assigns normalized probabilistic rewards, eliminating the need for additional annotations or costly external critics. This approach provides fine-grained feedback for memory updates within a multi-turn reinforcement learning framework. Extensive experiments across seven long-context benchmarks demonstrate consistent and significant improvements over strong baselines across models of varying scales, confirming both the effectiveness and generalizability of the proposed method.
📝 Abstract
The rapid progress of large language models (LLMs) has led to remarkable performance gains across a wide range of tasks. However, when a long document exceeds the model's context window, the entire context cannot be processed in a single pass, so the document must be handled chunk by chunk over multiple turns, with the model reading each chunk and updating its memory. Supervision is typically provided only by the final outcome, which makes it difficult to evaluate the quality of the memory update at each turn during multi-turn training; this introduces a temporal credit assignment challenge. Existing approaches, such as LLM-as-a-judge or process reward models, incur substantial computational overhead and suffer from estimation noise. To better address the credit assignment problem in multi-turn memory training, we propose Teacher-Aligned Reward Reshaping for Multi-Turn Reinforcement Learning (TAMTRL). TAMTRL leverages relevant documents as teacher signals, aligning them with the model input at each turn and assigning rewards through normalized probabilities in a self-supervised manner. This provides a fine-grained learning signal for each memory update and improves long-context processing. Experiments with multiple models of varying scales across seven long-context benchmarks show that TAMTRL consistently outperforms strong baselines, demonstrating its effectiveness. Our code is available at https://anonymous.4open.science/r/TAMTRL-F1F8.
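To make the reward-reshaping idea concrete, here is a minimal sketch of one plausible reading of it, not the paper's actual implementation. It assumes each turn's reward is derived from the (length-normalized) log-likelihood the model assigns to the relevant teacher document after that turn's memory update, with a softmax over turns producing normalized probabilistic rewards; the function name `reshape_rewards` and both arguments are illustrative assumptions.

```python
import math

def reshape_rewards(teacher_logprobs, token_counts):
    """Turn per-turn teacher-document log-likelihoods into per-turn rewards.

    teacher_logprobs[t]: summed log-probability the model assigns to the
    relevant (teacher) document after the memory update at turn t
    (an assumed interface, for illustration only).
    token_counts[t]: length of that teacher document in tokens, used for
    length normalization so turns with different-length teacher
    documents remain comparable.
    """
    # Length-normalized per-token log-likelihood for each turn.
    per_token = [lp / n for lp, n in zip(teacher_logprobs, token_counts)]
    # Softmax over turns yields normalized rewards in (0, 1) that sum to 1,
    # giving each memory update a fine-grained relative credit signal.
    m = max(per_token)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in per_token]
    z = sum(exps)
    return [e / z for e in exps]
```

Because the rewards are computed from the model's own likelihoods over already-available relevant documents, no extra annotation or external critic is needed, matching the self-supervised framing above.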