🤖 AI Summary
Multimodal large language models (MLLMs) face an inherent tension between factual accuracy and creative generation, and existing approaches lack flexible, training-free control over associative reasoning strength. To address this, we propose a lightweight, inference-time framework for controllable associative reasoning, grounded in an analysis of intermediate-layer representations that reveals their pivotal role in associative behavior. Our method introduces a hallucination-guided directional encoding mechanism, integrating high-relevance instance selection, adaptive strength calibration, and task-specific vector injection to enable interpretable, multi-dimensional associative control. Evaluated on Creation-MMBench, our approach improves creativity by up to 5.8×; on CHAIR, it reduces hallucination rates by 29%, significantly outperforming state-of-the-art methods. Crucially, it achieves the first fine-grained, plug-and-play adjustment of associative strength across diverse task scenarios—without architectural modification or parameter updates.
📝 Abstract
Multimodal large language models (MLLMs) face an inherent trade-off between faithfulness and creativity, as different tasks require varying degrees of associative reasoning. However, existing methods lack the flexibility to modulate this reasoning strength, limiting MLLMs' adaptability across factual and creative scenarios. To bridge this gap, we propose equipping MLLMs with mechanisms that enable flexible control over associative reasoning. We begin by investigating the internal mechanisms underlying associative behavior in MLLMs and find that: (1) middle layers play a pivotal role in shaping the model's associative tendencies, (2) modifying representations in these layers effectively regulates associative reasoning strength, and (3) hallucinations can be exploited to derive steering vectors that guide this modulation. Building on these findings, we introduce Flexible Association Control (FlexAC), a lightweight and training-free framework for modulating associative behavior in MLLMs. FlexAC first induces hallucination-guided intermediate representations to encode associative directions. Then, it selects high-association instances to construct effective associative steering vectors, whose strengths are adaptively calibrated to balance creative guidance with output stability. Finally, recognizing the multi-dimensional nature of associative reasoning, FlexAC incorporates task-specific associative vectors derived from a forward pass on a few target-domain samples, enabling models to follow diverse associative directions and better adapt to creative tasks. Notably, our method achieves up to a 5.8x improvement in creativity on Creation-MMBench and a 29% reduction in hallucination rate on CHAIR, surpassing existing baselines and demonstrating its effectiveness in enabling flexible control over associative reasoning in MLLMs. Our code is available at https://github.com/ylhz/FlexAC.
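As a rough, self-contained sketch of the general steering-vector idea the abstract describes (not FlexAC's actual implementation; all function names, shapes, and the norm-preserving rescaling below are illustrative assumptions), a steering direction can be taken as the mean difference between hidden states from hallucination-inducing and baseline inputs, then added to a middle layer's activations with a signed, calibrated strength:

```python
import numpy as np

def build_steering_vector(assoc_states, base_states):
    # Mean difference between high-association (hallucination-induced)
    # and baseline hidden states; an illustrative stand-in for the
    # paper's hallucination-guided directional encoding.
    return assoc_states.mean(axis=0) - base_states.mean(axis=0)

def inject(hidden, vec, alpha):
    # Add the steering vector with strength alpha, then rescale so the
    # hidden-state norm is preserved -- a simple toy proxy for adaptive
    # strength calibration that keeps outputs stable.
    steered = hidden + alpha * vec
    return steered * (np.linalg.norm(hidden) / np.linalg.norm(steered))

rng = np.random.default_rng(0)
d = 8                                          # toy hidden dimension
assoc = rng.normal(1.0, 0.1, size=(16, d))     # toy "high-association" states
base = rng.normal(0.0, 0.1, size=(16, d))      # toy baseline states
vec = build_steering_vector(assoc, base)

h = rng.normal(size=d)                         # a middle-layer hidden state
h_creative = inject(h, vec, alpha=+1.0)        # push toward association
h_faithful = inject(h, vec, alpha=-1.0)        # suppress association
```

In a real MLLM this injection would typically be applied via a forward hook on the chosen middle transformer layer; a positive `alpha` steers generations toward associative (creative) behavior and a negative one toward faithfulness.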