🤖 AI Summary
Existing reasoning segmentation methods suffer from poor out-of-distribution generalization, opaque reasoning processes, and pervasive "overthinking": generating unnecessarily long reasoning chains regardless of task difficulty, which incurs high computational overhead and unstable output quality. This paper proposes PixelThink, a lightweight reinforcement learning framework (a GRPO variant) that jointly regulates reasoning length based on task difficulty and model uncertainty, enabling on-demand compression of pixel-level reasoning chains. We introduce the first dual-driven dynamic control mechanism guided by difficulty and uncertainty, and establish ReasonSeg-Diff, the first benchmark featuring difficulty annotations and multi-dimensional reasoning-quality evaluation. Experiments demonstrate that our method maintains or improves segmentation accuracy while substantially shortening reasoning chains (37.2% average compression), reducing computational cost, and enhancing reasoning controllability and interpretability.
📝 Abstract
Existing reasoning segmentation approaches typically fine-tune multimodal large language models (MLLMs) on image-text pairs and corresponding mask labels. However, they exhibit limited generalization to out-of-distribution scenarios and lack an explicit reasoning process. Although recent efforts leverage reinforcement learning through group-relative policy optimization (GRPO) to enhance reasoning ability, they often suffer from overthinking: producing uniformly verbose reasoning chains irrespective of task complexity. This results in elevated computational costs and limited control over reasoning quality. To address this problem, we propose PixelThink, a simple yet effective scheme that integrates externally estimated task difficulty and internally measured model uncertainty to regulate reasoning generation within a reinforcement learning paradigm. The model learns to compress reasoning length in accordance with scene complexity and predictive confidence. To support comprehensive evaluation, we introduce ReasonSeg-Diff, an extended benchmark with annotated reasoning references and difficulty scores, along with a suite of metrics designed to jointly assess segmentation accuracy, reasoning quality, and efficiency. Experimental results demonstrate that the proposed approach improves both reasoning efficiency and overall segmentation performance. Our work offers a new perspective on efficient and interpretable multimodal understanding. The code and model will be publicly available.
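The core idea of regulating reasoning length by task difficulty and model uncertainty can be sketched as a reward shaping term. The following is a minimal illustrative sketch, not the paper's actual implementation: the function names, the token-budget formula, and the linear penalty are all assumptions made for illustration; the real reward design in PixelThink may differ.

```python
# Hypothetical sketch of a difficulty/uncertainty-regulated length reward,
# in the spirit of PixelThink. All names and formulas here are assumptions,
# not the paper's implementation.

def target_length(difficulty: float, uncertainty: float,
                  min_len: int = 32, max_len: int = 256) -> float:
    """Map task difficulty and model uncertainty (both in [0, 1]) to a
    token budget for the reasoning chain: harder or more uncertain
    cases are allotted longer chains."""
    scale = max(difficulty, uncertainty)  # either signal can demand more reasoning
    return min_len + scale * (max_len - min_len)

def length_reward(n_tokens: int, difficulty: float, uncertainty: float) -> float:
    """Reward staying within the budget; penalize chains that exceed it.
    Returns a value in [0, 1], decaying linearly past the budget."""
    budget = target_length(difficulty, uncertainty)
    if n_tokens <= budget:
        return 1.0
    return max(0.0, 1.0 - (n_tokens - budget) / budget)

def total_reward(iou: float, n_tokens: int,
                 difficulty: float, uncertainty: float,
                 alpha: float = 0.5) -> float:
    """Combine segmentation accuracy (e.g. mask IoU) with the length term;
    alpha trades accuracy against reasoning brevity."""
    return iou + alpha * length_reward(n_tokens, difficulty, uncertainty)
```

Under this kind of shaping, an easy, low-uncertainty sample that emits a long chain is penalized, while the same chain length on a hard sample is not, which is what allows reasoning length to track scene complexity rather than being uniformly verbose.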