Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning

📅 2024-07-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Continual knowledge learning (CKL) for large language models suffers from severe catastrophic forgetting, inefficient training due to uniform token-level gradient weighting (causing redundant parameter updates), and a lack of benchmarks explicitly evaluating the learning–retention trade-off. Method: We propose Train-Attention Augmented Language Models (TAALM), introducing the first token-level dynamic attention weighting mechanism during training, driven by a meta-learned importance prediction network that adaptively assigns gradient weights for precise knowledge updates. Additionally, we construct LAMA-ckl—the first CKL benchmark explicitly designed to quantify the learning–retention trade-off. Results: TAALM achieves state-of-the-art performance on both existing and newly introduced CKL benchmarks, is compatible with mainstream CKL methods, and significantly improves learning efficiency and long-term memory retention.

📝 Abstract
Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting. However, these methods naively inherit the inefficiencies of standard training procedures, indiscriminately applying uniform weight across all tokens, which can lead to unnecessary parameter updates and increased forgetting. To address these shortcomings, we propose a novel CKL approach termed Train-Attention-Augmented Language Model (TAALM), which enhances learning efficiency by dynamically predicting and applying weights to tokens based on their usefulness. This method employs a meta-learning framework that optimizes token importance predictions, facilitating targeted knowledge updates and minimizing forgetting. We also observe that existing benchmarks do not clearly exhibit the trade-off between learning and retention, so we propose a new benchmark, LAMA-ckl, to address this issue. In experiments on both the newly introduced and established CKL benchmarks, TAALM achieves state-of-the-art performance over the baselines and shows synergistic compatibility when integrated with previous CKL approaches.
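The core idea of the abstract can be illustrated with a token-importance-weighted language-model loss. The sketch below is not the authors' implementation; `weighted_token_loss` and its inputs are hypothetical, and the per-token `weights` stand in for the output of TAALM's meta-learned importance predictor. With uniform weights it reduces to the standard mean negative log-likelihood.

```python
import math

def weighted_token_loss(log_probs, weights):
    """Token-importance weighted negative log-likelihood (illustrative).

    log_probs: per-token log-probabilities of the target tokens.
    weights:   per-token importance weights, here standing in for a
               meta-learned predictor's output; uniform weights recover
               the standard LM training loss.
    """
    assert len(log_probs) == len(weights)
    total = sum(-w * lp for w, lp in zip(weights, log_probs))
    return total / max(sum(weights), 1e-8)

# Three tokens with different model confidences.
lps = [math.log(0.5), math.log(0.25), math.log(0.8)]

# Uniform weighting: ordinary mean NLL over all tokens.
uniform = weighted_token_loss(lps, [1.0, 1.0, 1.0])

# Up-weighting one "useful" token concentrates the gradient signal
# there instead of spreading updates uniformly across all tokens.
focused = weighted_token_loss(lps, [0.1, 2.0, 0.1])
```

In a real training loop the weights would multiply per-token gradients, so low-importance tokens cause few parameter updates, which is the mechanism the abstract credits for reducing unnecessary updates and forgetting.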
Problem

Research questions and friction points this paper is trying to address.

Dynamically optimize token-level weights during CKL training.
Minimize catastrophic forgetting in LLMs.
Provide a benchmark that explicitly measures the learning–retention trade-off.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token-weight prediction
Meta-learning framework for optimizing token importance predictions
Enhanced continual knowledge learning
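The meta-learning contribution above can be sketched as a tiny bi-level loop: an inner weighted gradient step on the model, and an outer step that adjusts token importance weights so that the post-update model does better on a meta objective. This is an illustrative toy, not the paper's algorithm: the model is a single scalar, `meta_loss` is a hypothetical stand-in for the paper's utility objective, and finite differences replace backpropagation through the inner step.

```python
def inner_update(theta, grads, weights, lr=0.1):
    """One importance-weighted gradient step on the model parameter.
    Each token's gradient contribution is scaled by its weight."""
    g = sum(w * gi for w, gi in zip(weights, grads)) / max(sum(weights), 1e-8)
    return theta - lr * g

def meta_step(theta, grads, weights, meta_loss, lr_meta=0.5, eps=1e-4):
    """Outer update: nudge each importance weight down the
    finite-difference gradient of the post-update meta objective
    (a crude stand-in for differentiating through inner_update)."""
    base = meta_loss(inner_update(theta, grads, weights))
    new_weights = []
    for i, w in enumerate(weights):
        bumped = list(weights)
        bumped[i] = w + eps
        up = meta_loss(inner_update(theta, grads, bumped))
        new_weights.append(max(w - lr_meta * (up - base) / eps, 0.0))
    return new_weights

# Toy setup: the meta objective wants theta to move toward 1.0.
# Token A's gradient (-1.0) pushes theta up; token B's (+1.0) pushes it down.
meta_loss = lambda t: (t - 1.0) ** 2
weights = meta_step(theta=0.0, grads=[-1.0, 1.0],
                    weights=[1.0, 1.0], meta_loss=meta_loss)
```

After one outer step the weight on the helpful token A rises above 1.0 while token B's falls below it, mirroring how a meta-learned importance predictor would learn to up-weight tokens whose updates aid the target knowledge.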