Adaptive Task Vectors for Large Language Models

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing in-context learning (ICL) methods suffer from sensitivity to demonstration ordering and strict context-length constraints, while task-vector approaches compress task information into fixed vectors that lack input awareness and therefore generalize poorly to unseen tasks. To address these limitations, the paper proposes a **query-conditioned dynamic task-vector generation mechanism**: a small language model adaptively generates input-dependent task vectors, which are mapped through an architecture-aligned projection and injected into the feed-forward layers of a large language model (LLM). The authors prove that this method is expressively equivalent to LoRA under equal rank budgets and strictly more expressive than Prefix-Tuning. Empirically, it improves ICL robustness and cross-task generalization, outperforming both fixed task-vector baselines and standard ICL on multi-task benchmarks while enabling effective zero-shot transfer to novel tasks.

📝 Abstract
In-Context Learning (ICL) enables Large Language Models (LLMs) to perform tasks without parameter updates by conditioning on a few demonstrations provided in the prompt. Despite its success, ICL suffers from several limitations, including sensitivity to demonstration order, context length constraints, and computational inefficiency. To address these challenges, task vector-based approaches compress task information into a single vector. However, these methods typically construct task vectors from fixed sets of demonstrations and reuse them across input queries, without conditioning on the specific input. This limitation can lead models to struggle with effective adaptation when the input query is not well aligned with the underlying demonstrations, consequently degrading their generalization performance on unseen tasks. To overcome this limitation, we propose Adaptive Task Vectors (ATV), a simple and effective framework that dynamically generates task vectors conditioned on each input query. ATV employs a small language model to generate task vectors, which are then transformed to match the target LLM's architecture and applied to guide its output generation. In contrast to ICL and previous vector-based approaches, which rely on fixed demonstration sets and their corresponding vectors, ATV dynamically generates task vectors tailored to each specific input query and task. Consequently, ATV demonstrates strong performance and generalization capabilities, even for unseen tasks. Furthermore, we provide a theoretical analysis indicating that ATV is expressively equivalent to LoRA under equal rank budgets and more expressive than Prefix-Tuning, thereby offering formal support for its representational advantage.
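The abstract's mechanism can be sketched in a few lines. This is a toy illustration only: the dimensions, the linear stand-in for the small language model, and the exact injection point are assumptions for clarity, not the paper's actual configuration.

```python
import numpy as np

# Minimal sketch of the ATV idea. All sizes and the linear "generator"
# are illustrative assumptions, not the paper's real architecture.
rng = np.random.default_rng(0)

d_query, d_task, d_hidden = 8, 4, 16   # hypothetical dimensions

# Stand-in for the small language model: a linear map from the query
# representation to a query-conditioned task vector.
W_gen = rng.standard_normal((d_task, d_query))

# Architecture-aligned projection into the target LLM's hidden size.
W_proj = rng.standard_normal((d_hidden, d_task))

# Toy feed-forward weight of the frozen target LLM layer.
W_ffn = rng.standard_normal((d_hidden, d_hidden))

def adaptive_task_vector(query_repr):
    """Generate an input-dependent task vector and project it."""
    v = W_gen @ query_repr             # depends on the query, unlike a fixed vector
    return W_proj @ v

def ffn_with_injection(h, query_repr):
    """Toy feed-forward layer with the projected task vector added to its output."""
    return np.tanh(W_ffn @ h) + adaptive_task_vector(query_repr)

q = rng.standard_normal(d_query)       # query representation
h = rng.standard_normal(d_hidden)      # LLM hidden state at this layer
out = ffn_with_injection(h, q)
print(out.shape)                       # prints (16,)
```

The key contrast with fixed task-vector methods is that `adaptive_task_vector` is a function of the query, so two different inputs receive two different steering vectors.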
Problem

Research questions and friction points this paper is trying to address.

ICL suffers from sensitivity to demonstration order and context constraints
Fixed task vectors degrade generalization on misaligned input queries
Adaptive Task Vectors dynamically generate input-specific task guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic task vectors for each input query
Small model generates adaptive task vectors
Expressive equivalence to LoRA, surpassing Prefix-Tuning
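A rough sketch of why the LoRA comparison is plausible, under the simplifying assumption that the generator acts linearly on the query representation $x$ (the paper's formal argument is more general than this):

```latex
\text{LoRA:} \quad h = (W + BA)\,x = Wx + B(Ax), \qquad \operatorname{rank}(BA) \le r
```
```latex
\text{ATV:} \quad h = Wx + P\,g_\phi(x) \;\;\xrightarrow{\;g_\phi(x) = Ax\;}\;\; h = Wx + P(Ax)
```

In this linear special case the projection $P$ plays the role of LoRA's $B$ factor, so a rank-$r$ ATV injection can realize the same layer update as a rank-$r$ LoRA adapter, which is consistent with the claimed equivalence under equal rank budgets.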
👥 Authors
Joonseong Kang (Yonsei University)
Soojeong Lee (Yonsei University)
Subeen Park (Yonsei University)
Sumin Park (Yonsei University)
Taero Kim (Yonsei University)
Jihee Kim (Korea Advanced Institute of Science and Technology, KAIST)
Ryunyi Lee (Yonsei University)
Kyungwoo Song (Yonsei University)