Beyond Learning: A Training-Free Alternative to Model Adaptation

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of existing language models on certain tasks and the high computational cost of mainstream, training-based adaptation methods. The authors propose a novel paradigm termed “model transplantation”: task-relevant local functional modules are identified within a source model through activation analysis and transferred directly into a target model, improving performance without any training. The approach provides empirical evidence that language models contain internally localized, task-specific structures and that these structures enable cross-model capability transfer. Experiments show that transplanting modules across model generations, or between base and instruction-tuned variants, reaches up to 2.33× the target baseline performance, with full recovery (100%) of the task performance gap in the best cases.

📝 Abstract
Despite continuous research and evolution, language models sometimes underperform their previous versions. Existing approaches to overcoming this are resource-intensive, highlighting the need for alternatives that allow immediate action. We hypothesize that each language model contains internal local modules, each suited to a specific function. First, this work uses activation-based analysis to identify a set of modules that show consistent, localized activation changes under an inference workload. We then transplant an internal module that is strongly activated for a specific task into the target model, producing immediate and measurable functional changes without additional training or fine-tuning. To demonstrate the effectiveness of the transplant technique experimentally, we quantify the relationship between transplant strength and performance improvement under different conditions for two language models. In the cross-generation setting, transplanting activation-selected modules substantially improves the underperforming model, reaching up to twice the target baseline and achieving gap-based recovery above 100%. Moreover, in transplant experiments between a base model and its instruction-tuned counterpart, transplantation improves the underperforming model toward the stronger baseline, yielding up to about 2.33 times the target baseline with gap-based recovery of up to 100% in the best case. These results show that meaningful capability transfer can be realized by implanting the highly localized modules embedded in language models. Overall, this work provides empirical evidence for task-localized modularity in language models and opens a new research direction: model transplantation.
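The abstract describes a two-step procedure: score modules by their activation change under a task workload, then blend the selected module's weights into the target model with a chosen transplant strength. A minimal toy sketch of that idea, assuming a simple interpolation rule and illustrative names (`select_module`, `transplant`, the dict-of-lists weight layout are all hypothetical, not the authors' code):

```python
def select_module(activation_deltas):
    """Pick the module with the largest mean absolute activation change
    under the task workload (a stand-in for the paper's activation analysis)."""
    return max(
        activation_deltas,
        key=lambda name: sum(map(abs, activation_deltas[name])) / len(activation_deltas[name]),
    )

def transplant(target_weights, source_weights, module, alpha):
    """Blend the chosen module's weights into the target model:
    w <- (1 - alpha) * w_target + alpha * w_source,
    where alpha is the transplant strength; other modules are untouched."""
    patched = {name: list(w) for name, w in target_weights.items()}
    patched[module] = [
        (1 - alpha) * t + alpha * s
        for t, s in zip(target_weights[module], source_weights[module])
    ]
    return patched

# Toy example: per-unit activation deltas for two "modules" under the workload.
deltas = {"mlp.3": [0.9, 1.1], "attn.1": [0.1, 0.05]}
module = select_module(deltas)  # "mlp.3" shows the localized change

target = {"mlp.3": [0.0, 0.0], "attn.1": [1.0, 1.0]}
source = {"mlp.3": [2.0, 4.0], "attn.1": [3.0, 3.0]}
patched = transplant(target, source, module, alpha=0.5)
# patched["mlp.3"] == [1.0, 2.0]; patched["attn.1"] is unchanged.
```

Sweeping `alpha` from 0 to 1 would correspond to the paper's transplant-strength experiments: at 0 the target is unchanged, at 1 the module is fully replaced by the source's.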
Problem

Research questions and friction points this paper is trying to address.

model adaptation
language models
training-free
performance degradation
modularity
Innovation

Methods, ideas, or system contributions that make the work stand out.

model transplantation
activation-based analysis
task-localized modularity
training-free adaptation
module transfer