You only need 4 extra tokens: Synergistic Test-time Adaptation for LLMs

📅 2025-10-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the performance degradation that large language models (LLMs) suffer in domain-specific applications such as finance, healthcare, and agriculture, caused by train-test distribution shift, this paper proposes SyTTA, a label-free test-time adaptation framework. SyTTA refines generation on the fly during inference by jointly leveraging input-side perplexity and output-side predictive entropy as complementary uncertainty signals. Crucially, it achieves efficient online adaptation with only four additional tokens per query, eliminating reliance on labeled data or task-specific fine-tuning, and delivers consistent gains across diverse model architectures. Empirical evaluation on agricultural question answering shows an improvement of over 120% in ROUGE-Lsum on Qwen-2.5-7B, underscoring the effectiveness and practicality of unsupervised test-time adaptation for specialized domains.

📝 Abstract
Large language models (LLMs) are increasingly deployed in specialized domains such as finance, medicine, and agriculture, where they face significant distribution shifts from their training data. Domain-specific fine-tuning can mitigate this challenge but relies on high-quality labeled data that is expensive and slow to collect in expertise-limited settings. We study label-free test-time adaptation for language models and present SyTTA, an inference-time framework that adapts models on-the-fly without additional supervision. SyTTA couples two complementary uncertainty signals that arise under distribution shift: input-side perplexity, indicating mismatch with domain-specific terminology and patterns, and output-side predictive entropy, indicating diffuse and unstable token probabilities during generation. Across diverse model architectures and domain-specific benchmarks, SyTTA delivers consistent gains. Notably, on agricultural question answering, SyTTA improves Rouge-LSum by over 120% on Qwen-2.5-7B with only 4 extra tokens per query. These results show that effective test-time adaptation for language models is achievable without labeled examples, supporting deployment in label-scarce domains. The code will be made available upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

How to handle distribution shift when LLMs are deployed in specialized domains without labeled data
Whether test-time adaptation is feasible for generative language models using only unsupervised signals such as input perplexity and output entropy
How to improve domain-specific performance with minimal computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Couples input-side perplexity and output-side predictive entropy as complementary uncertainty signals
Adapts the model during inference without any supervision or fine-tuning
Achieves test-time adaptation with only four extra tokens per query
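The two uncertainty signals listed above follow standard definitions, which can be sketched as below. This is an illustrative computation only, not the paper's released implementation; the function names and toy inputs are ours.

```python
import math

def perplexity(token_logprobs):
    """Input-side perplexity: exp of the mean negative log-likelihood
    of the prompt tokens. High values indicate the input's terminology
    is unfamiliar to the model (distribution shift on the input side)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

def predictive_entropy(probs):
    """Output-side predictive entropy (in nats) of the next-token
    distribution. A diffuse, high-entropy distribution signals
    unstable generation under distribution shift."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Toy values: four prompt tokens each assigned probability 0.5,
# and a uniform next-token distribution over four candidates.
print(perplexity([math.log(0.5)] * 4))      # 2.0
print(predictive_entropy([0.25] * 4))       # log(4) ≈ 1.386
```

How SyTTA couples the two signals into an adaptation update is the paper's contribution and is not reproduced here; the sketch only makes the raw quantities concrete.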