Beyond the Linear Separability Ceiling

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a "Linear Separability Ceiling" (LSC) in vision-language models (VLMs) on abstract reasoning tasks: although visual embeddings are discriminative, linear classifiers trained on them hit a performance ceiling, not because perceptual representations are inadequate but because the language model's reasoning pathways are misaligned with visual semantics. To address this, the authors propose a postfix-tuning intervention framework that reveals dormant reasoning pathways in VLMs, pathways that remain inactive under standard inference. They demonstrate that semantic tasks benefit from pathway activation alone, whereas complex relational reasoning requires fine-tuning core model weights. Experiments show that targeted pathway alignment substantially improves reasoning performance; however, excessive adaptation harms cross-prompt generalization, exposing a fundamental trade-off between representation quality and reasoning-pathway alignment.

📝 Abstract
Most state-of-the-art Visual-Language Models (VLMs) are seemingly limited by the linear separability of their visual embeddings on abstract reasoning tasks. This work investigates this "linear reasoning bottleneck" by introducing the Linear Separability Ceiling (LSC), the performance of a simple linear classifier on a VLM's visual embeddings. We find this bottleneck is widespread and stems not from poor perception, but from failures in the language model's reasoning pathways. We demonstrate this is a solvable alignment issue. The required intervention, however, is task-dependent: activating existing pathways suffices for semantic concepts, while complex relational reasoning requires adapting core model weights. Using postfix tuning as a methodological control, we find strong evidence for powerful, dormant reasoning pathways within VLMs. However, for complex relational tasks requiring deeper adaptation, explicitly improving representation quality causes the model to fail on new prompt formats despite its embeddings remaining well separated. Ultimately, this work provides a new lens for VLM analysis, showing that robust reasoning is a matter of targeted alignment, not simply improved representation learning.
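Since the LSC is defined as the performance of a simple linear classifier on a VLM's visual embeddings, it can be sketched as the held-out accuracy of a linear probe fit to frozen embeddings. The numpy least-squares probe and the synthetic, linearly separable data below are illustrative assumptions, not the paper's actual models or benchmarks:

```python
# Hypothetical sketch: estimating a Linear Separability Ceiling (LSC)
# with a least-squares linear probe. The synthetic embeddings stand in
# for frozen VLM visual embeddings; nothing here is the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "visual embeddings": two classes split by a hyperplane.
n, d = 400, 64
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Simple train/test split.
X_tr, X_te = X[:300], X[300:]
y_tr, y_te = y[:300], y[300:]

# Fit a linear probe by least squares (bias via an appended ones column).
A_tr = np.hstack([X_tr, np.ones((len(X_tr), 1))])
w, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)

# LSC = held-out accuracy of the linear probe on the embeddings.
A_te = np.hstack([X_te, np.ones((len(X_te), 1))])
pred = (A_te @ w > 0.5).astype(float)
lsc = float((pred == y_te).mean())
print(f"LSC (linear probe accuracy): {lsc:.3f}")
```

A model whose end-to-end task accuracy falls below this probe accuracy is, in the paper's framing, bottlenecked by reasoning-pathway alignment rather than by its representations.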
Problem

Research questions and friction points this paper is trying to address.

Investigates linear reasoning bottleneck in VLMs
Identifies task-dependent alignment solutions for VLMs
Reveals dormant reasoning pathways in VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Linear Separability Ceiling (LSC) metric
Uses postfix tuning to activate dormant pathways
Adapts core weights for complex relational tasks
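The postfix-tuning idea above can be sketched in miniature: the base model's weights stay frozen and only an appended suffix embedding is optimized. The tiny mean-pooled MLP head, the random data, and all names below are illustrative assumptions, not the paper's VLM architecture or training recipe:

```python
# Toy sketch of postfix (suffix) tuning: train ONLY an appended suffix
# embedding while the base "model" (a frozen 2-layer MLP head over
# mean-pooled token embeddings) is never updated. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, hdim, n_cls, T, n = 16, 32, 3, 5, 256

# Frozen base weights (never touched by the optimizer).
W1 = rng.normal(size=(hdim, d)) / np.sqrt(d)
W2 = rng.normal(size=(n_cls, hdim)) / np.sqrt(hdim)

# Toy inputs: T token embeddings per example, with random labels.
tokens = rng.normal(size=(n, T, d))
y = rng.integers(0, n_cls, size=n)
onehot = np.eye(n_cls)[y]

suffix = np.zeros(d)  # the only trainable parameter (the "postfix")

def forward(suffix):
    h = (tokens.sum(axis=1) + suffix) / (T + 1)  # pool tokens + suffix
    z = h @ W1.T
    a = np.maximum(z, 0.0)                       # ReLU
    logits = a @ W2.T
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    return loss, z, p

loss_before, _, _ = forward(suffix)
lr = 0.2
for _ in range(200):
    loss, z, p = forward(suffix)
    dlogits = (p - onehot) / n        # dL/dlogits (mean cross-entropy)
    dz = (dlogits @ W2) * (z > 0)     # back through W2 and ReLU
    dsuffix = (dz @ W1).sum(axis=0) / (T + 1)
    suffix -= lr * dsuffix            # update suffix only; W1, W2 frozen

loss_after, _, _ = forward(suffix)
print(f"loss before: {loss_before:.3f}  after: {loss_after:.3f}")
```

Because the base weights never change, any improvement must come from steering the frozen computation with the learned suffix, which is why the paper can use postfix tuning as a control to detect dormant pathways.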