🤖 AI Summary
State-of-the-art commercial vision-language models (e.g., GPT, Claude, Gemini) exhibit poor performance—only ~21.9% accuracy—on challenging real-world CAPTCHA spatial reasoning tasks, revealing a fundamental deficiency in explicit, stepwise reasoning. Method: We introduce CAPTCHA-X, the first real-world CAPTCHA benchmark with fine-grained reasoning annotations—step-by-step action solutions and grounding labels across seven CAPTCHA categories—along with five novel reasoning-oriented evaluation metrics. We further propose a general reasoning-augmentation framework grounded in an agentic architecture, integrating coordinate-based visual grounding, structured chain-of-thought prompting, and multi-stage verification. Contribution/Results: Our method achieves 83.9% average accuracy across five high-difficulty CAPTCHA categories—surpassing baselines by 62 percentage points—and provides the first systematic empirical validation that explicit spatial reasoning critically enhances the cognitive capabilities of vision-language models.
📝 Abstract
CAPTCHA, originally designed to distinguish humans from robots, has evolved into a real-world benchmark for assessing the spatial reasoning capabilities of vision-language models (VLMs). In this work, we first show that step-by-step reasoning is crucial for VLMs to solve CAPTCHAs, which represent high-difficulty spatial reasoning tasks, and that current commercial VLMs still struggle with such reasoning. In particular, we observe that most commercial VLMs (e.g., Gemini, Claude, GPT) fail to solve CAPTCHAs effectively and thus achieve low accuracy (around 21.9 percent). However, our findings indicate that requiring the model to perform step-by-step reasoning before generating the final coordinates can significantly enhance its solving accuracy, underscoring both the severity of the gap and a concrete path toward closing it. To systematically study this issue, we introduce CAPTCHA-X, the first real-world CAPTCHA benchmark with reasoning, covering seven categories of CAPTCHAs (such as Gobang and hCaptcha) with step-by-step action solutions and grounding annotations. We further define five reasoning-oriented metrics that enable a comprehensive evaluation of models' reasoning capabilities. To validate the effectiveness of reasoning, we also propose a general agentic VLM-based framework that leverages the model's inherent reasoning abilities. Our method achieves state-of-the-art performance across five high-difficulty CAPTCHA types, with an average solving accuracy of 83.9 percent, substantially surpassing existing baselines. These results reveal the limitations of current models and highlight the importance of reasoning for advancing visual-spatial capabilities in future models.
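The framework the abstract describes—structured chain-of-thought prompting, coordinate-based output, and verification before accepting an answer—can be sketched as a simple agentic loop. This is a minimal illustration, not the paper's implementation: the `query_vlm(image, prompt)` callable and the `FINAL ANSWER: (x, y)` output convention are assumptions made here for concreteness.

```python
import re

def parse_coordinates(response: str):
    """Extract the final (x, y) click coordinate from a model response.

    Assumes (for illustration) the model ends its reasoning with a line
    of the form 'FINAL ANSWER: (x, y)'.
    """
    match = re.search(r"FINAL ANSWER:\s*\((\d+),\s*(\d+)\)", response)
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))

def solve_captcha(query_vlm, image, width, height, max_attempts=3):
    """Reasoning-augmented solving loop: ask for step-by-step reasoning
    before the coordinates, then verify the coordinates ground inside
    the image; re-prompt on failure (a stand-in for the paper's
    multi-stage verification)."""
    prompt = (
        "Reason step by step about the spatial layout of the CAPTCHA, "
        "then output the click target as 'FINAL ANSWER: (x, y)'."
    )
    for _ in range(max_attempts):
        response = query_vlm(image, prompt)  # hypothetical VLM interface
        coords = parse_coordinates(response)
        # Verification stage: the answer must be a parseable coordinate
        # that lies within the image bounds.
        if coords and 0 <= coords[0] < width and 0 <= coords[1] < height:
            return coords
        prompt += " Your previous answer was invalid; re-check each step."
    return None
```

In this sketch the explicit reasoning is elicited purely through the prompt, and only the grounded coordinate is verified—the benchmark's five reasoning-oriented metrics would additionally score the intermediate steps themselves.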