ReasonCACHE: Teaching LLMs To Reason Without Weight Updates

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models struggle to perform complex reasoning through conventional in-context learning without weight updates, owing to limits on context length, attention overhead, and representational capacity. This work proposes ReasonCACHE, a method that distills reasoning demonstrations into trainable, fixed key-value caches via prefix tuning and injects them directly into the Transformer attention layers. Without modifying model weights or extending the context window, ReasonCACHE substantially improves reasoning performance, surpassing standard in-context learning for the first time under zero-weight-update conditions and matching or exceeding full fine-tuning. Its effectiveness is validated on challenging benchmarks such as GPQA-Diamond, while offering significant advantages in data efficiency, inference cost, and number of trainable parameters.
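The core mechanism (a learned key-value cache injected into attention via prefix tuning) can be sketched roughly as follows. This is an illustrative, hypothetical PyTorch module, not the authors' implementation; the prefix length, initialization scale, and masking choices are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixAttention(nn.Module):
    """One attention layer with a trainable key-value prefix injected into its cache."""
    def __init__(self, d_model: int, n_heads: int, prefix_len: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)  # frozen base projections
        self.out = nn.Linear(d_model, d_model, bias=False)      # frozen base projection
        # The only trainable parameters: a fixed-size key-value "cache" per head.
        self.prefix_k = nn.Parameter(0.02 * torch.randn(n_heads, prefix_len, self.d_head))
        self.prefix_v = nn.Parameter(0.02 * torch.randn(n_heads, prefix_len, self.d_head))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2) for z in (q, k, v))
        # Prepend the learned cache to the real keys/values: no extra context tokens needed.
        pk = self.prefix_k.unsqueeze(0).expand(b, -1, -1, -1)
        pv = self.prefix_v.unsqueeze(0).expand(b, -1, -1, -1)
        k = torch.cat([pk, k], dim=2)
        v = torch.cat([pv, v], dim=2)
        # Every token may attend to the prefix; real tokens attend causally to each other.
        causal = torch.ones(t, t, dtype=torch.bool, device=x.device).tril()
        prefix_ok = torch.ones(t, pk.shape[2], dtype=torch.bool, device=x.device)
        mask = torch.cat([prefix_ok, causal], dim=1)
        y = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return self.out(y.transpose(1, 2).reshape(b, t, -1))
```

Because the prefix lives directly in the attention cache rather than in the token sequence, it adds no quadratic context cost and leaves the base projections untouched.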

📝 Abstract
Can large language models (LLMs) learn to reason without any weight updates, relying only on in-context learning (ICL)? ICL is strikingly sample-efficient, often learning from only a handful of demonstrations, but complex reasoning tasks typically demand many training examples. Naively scaling ICL by adding more demonstrations breaks down at this scale: attention costs grow quadratically, performance saturates or degrades with longer contexts, and the approach remains a shallow form of learning. Due to these limitations, practitioners predominantly rely on in-weight learning (IWL) to induce reasoning. In this work, we show that by using Prefix Tuning, LLMs can learn to reason without overloading the context window and without any weight updates. We introduce $\textbf{ReasonCACHE}$, an instantiation of this mechanism that distills demonstrations into a fixed key-value cache. Empirically, across challenging reasoning benchmarks, including GPQA-Diamond, ReasonCACHE outperforms standard ICL and matches or surpasses IWL approaches. Further, it achieves all of this while being more efficient along three key axes: data, inference cost, and trainable parameters. We also prove theoretically that ReasonCACHE can be strictly more expressive than a low-rank weight update, since the latter ties expressivity to the rank of the input, whereas ReasonCACHE bypasses this constraint by injecting key-value pairs directly into the attention mechanism. Together, our findings identify ReasonCACHE as a middle path between in-context and in-weight learning, providing a scalable algorithm for learning reasoning skills beyond the context window without modifying parameters. Our project page: https://reasoncache.github.io/
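To make the "distills demonstrations into a fixed key-value cache" step concrete, here is a minimal, hypothetical training loop. The `prefix_parameters()` accessor, the `reasoning_demos` iterable of tokenized (prompt, reasoning-trace) pairs, and the HF-style `.loss` interface are all illustrative assumptions, not details from the paper; the key point is that only the prefix receives gradients while the base model stays frozen.

```python
import torch

def distill_into_prefix(model, reasoning_demos, epochs: int = 3, lr: float = 1e-3):
    """Train only the prefix key-value cache on reasoning demonstrations."""
    for p in model.parameters():
        p.requires_grad_(False)                          # zero weight updates to the base LLM
    prefix_params = list(model.prefix_parameters())      # hypothetical accessor for the KV prefix
    for p in prefix_params:
        p.requires_grad_(True)
    opt = torch.optim.AdamW(prefix_params, lr=lr)
    for _ in range(epochs):
        for input_ids, labels in reasoning_demos:        # tokenized (prompt, reasoning trace) pairs
            loss = model(input_ids, labels=labels).loss  # standard next-token prediction loss
            loss.backward()
            opt.step()
            opt.zero_grad()
    return prefix_params                                 # the distilled, reusable key-value cache
```

Once distilled, the same small cache can be reused across queries at inference time, which is where the claimed savings in data, inference cost, and trainable parameters come from.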
Problem

Research questions and friction points this paper is trying to address.

in-context learning
reasoning
large language models
weight updates
context window
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReasonCACHE
in-context learning
prefix tuning
key-value cache
reasoning without weight updates