COGNITION: From Evaluation to Defense against Multimodal LLM CAPTCHA Solvers

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the threat posed by multimodal large language models (MLLMs) to CAPTCHA security. We design end-to-end solving experiments targeting mainstream MLLMs, incorporating single-shot inference, few-shot learning, and prompt-engineering strategies, and empirically measure success rates, computational cost, and latency. Results show that current MLLMs can break most recognition-based CAPTCHAs at low cost and low latency, yet struggle with tasks requiring fine-grained spatial localization or multi-step geometric reasoning. Through interpretability analysis, we identify the mechanisms behind model success and failure, particularly in visual-semantic alignment and structured output generation. Based on these findings, we propose defense-oriented CAPTCHA design principles: explicitly enforcing spatial relational modeling and stepwise reasoning requirements. This work provides both theoretical foundations and practical guidelines for developing next-generation automation-resistant CAPTCHAs.


📝 Abstract
This paper studies how multimodal large language models (MLLMs) undermine the security guarantees of visual CAPTCHAs. We identify the attack surface where an adversary can cheaply automate CAPTCHA solving using off-the-shelf models. We evaluate 7 leading commercial and open-source MLLMs across 18 real-world CAPTCHA task types, measuring single-shot accuracy, success under limited retries, end-to-end latency, and per-solve cost. We further analyze the impact of task-specific prompt engineering and few-shot demonstrations on solver effectiveness. We reveal that MLLMs can reliably solve recognition-oriented and low-interaction CAPTCHA tasks at human-like cost and latency, whereas tasks requiring fine-grained localization, multi-step spatial reasoning, or cross-frame consistency remain significantly harder for current models. By examining the reasoning traces of such MLLMs, we investigate the underlying mechanisms of why models succeed or fail on specific CAPTCHA puzzles and use these insights to derive defense-oriented guidelines for selecting and strengthening CAPTCHA tasks. We conclude by discussing implications for platform operators deploying CAPTCHA as part of their abuse-mitigation pipeline. Code Availability: https://anonymous.4open.science/r/Captcha-465E/.
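The abstract's evaluation protocol (single-shot accuracy, success under limited retries, latency, and per-solve cost) can be sketched as a small measurement harness. This is not the authors' released code; `solver` is a hypothetical stand-in for a real MLLM API call that receives a CAPTCHA image and returns an answer, and the cost-per-call figure is an illustrative assumption.

```python
import random
import time

def evaluate_solver(solver, tasks, retries=3, cost_per_call=0.002):
    """Measure single-shot accuracy, accuracy under limited retries,
    mean per-call latency, and average cost per successful solve.

    `solver` is any callable image -> answer (here, a hypothetical
    MLLM endpoint wrapper); `cost_per_call` is an assumed API price.
    """
    single_shot = retry_success = calls = 0
    total_latency = 0.0
    for image, answer in tasks:
        for attempt in range(retries):
            start = time.perf_counter()
            guess = solver(image)
            total_latency += time.perf_counter() - start
            calls += 1
            if guess == answer:
                if attempt == 0:
                    single_shot += 1
                retry_success += 1
                break
    n = len(tasks)
    return {
        "single_shot_acc": single_shot / n,
        "retry_acc": retry_success / n,
        "mean_latency_s": total_latency / calls,
        "cost_per_solve_usd": cost_per_call * calls / max(retry_success, 1),
    }

# Toy stand-in solver for demonstration: correct ~80% of the time.
def toy_solver(image):
    return image.upper() if random.random() < 0.8 else "wrong"

random.seed(0)
tasks = [(f"captcha{i}", f"CAPTCHA{i}") for i in range(50)]
metrics = evaluate_solver(toy_solver, tasks)
print(metrics)
```

With a real model, the same harness lets solver cost and latency be compared directly against human solving farms, which is the comparison the paper's threat model rests on.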
Problem

Research questions and friction points this paper is trying to address.

Evaluates MLLMs' ability to solve diverse visual CAPTCHA tasks
Analyzes attack surfaces and cost-effectiveness of automated CAPTCHA solving
Proposes defense guidelines by identifying MLLMs' weaknesses in specific CAPTCHA types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates multimodal LLMs on real-world CAPTCHA tasks
Analyzes impact of prompt engineering on solver effectiveness
Derives defense guidelines from model reasoning traces