Visual Agents as Fast and Slow Thinkers

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
Current AI systems lack human-like dual-system cognition, resulting in insufficient robustness and interpretability on complex or novel visual tasks (e.g., VQA, reasoning-based segmentation). To address this, we propose FaST, a vision agent that introduces dynamic, uncertainty-driven switching between fast (System 1) and slow (System 2) cognitive modes via a Switch Adapter. FaST employs a hierarchical vision-language reasoning architecture and a transparent decision pipeline enabling confidence calibration and context-aware incremental fusion. Built upon LLaVA, FaST achieves 80.8% accuracy on VQA v2 and 48.7% GIoU on ReasonSeg, substantially outperforming state-of-the-art baselines. These results demonstrate FaST's superior generalization, robustness, and interpretability in open-world visual understanding scenarios.
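
To make the switching concrete, here is a minimal Python sketch of an uncertainty-driven System 1/2 router. It is an illustration only: the `vlm` interface (`answer`, `gather_context`), the entropy-based uncertainty proxy, and the fixed threshold are all hypothetical, and FaST's actual Switch Adapter is a learned module built on LLaVA rather than a hand-set rule.

```python
import math

# Minimal sketch of uncertainty-driven System 1/2 switching. The `vlm`
# interface, the entropy proxy, and the threshold are illustrative
# assumptions, not the paper's implementation.

ENTROPY_THRESHOLD = 1.0  # assumed calibration point, not a paper value

def token_entropy(dist):
    """Shannon entropy of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def switch_adapter(vlm, image, question):
    """Route a query to the fast (System 1) or slow (System 2) path."""
    answer, step_dists = vlm.answer(image, question, return_probs=True)
    # Mean per-token entropy serves as a crude confidence signal.
    uncertainty = sum(token_entropy(d) for d in step_dists) / len(step_dists)
    if uncertainty < ENTROPY_THRESHOLD:
        return answer  # System 1: confident single-pass response
    # System 2: gather extra visual context (e.g., crops of the uncertain
    # region) and re-query with that context fused into the prompt.
    context = vlm.gather_context(image, question)
    refined, _ = vlm.answer(image, question, extra_context=context,
                            return_probs=True)
    return refined
```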

📝 Abstract
Achieving human-level intelligence requires refining cognitive distinctions between System 1 and System 2 thinking. While contemporary AI, driven by large language models, demonstrates human-like traits, it falls short of genuine cognition. Transitioning from structured benchmarks to real-world scenarios presents challenges for visual agents, often leading to inaccurate and overly confident responses. To address this challenge, we introduce FaST, which incorporates the Fast and Slow Thinking mechanism into visual agents. FaST employs a switch adapter to dynamically select between System 1/2 modes, tailoring the problem-solving approach to different task complexities. It tackles uncertain and unseen objects by adjusting model confidence and integrating new contextual data. With this novel design, we advocate a flexible system, hierarchical reasoning capabilities, and a transparent decision-making pipeline, all of which contribute to its ability to emulate human-like cognitive processes in visual intelligence. Empirical results demonstrate that FaST outperforms various well-known baselines, achieving 80.8% accuracy on VQA^{v2} for visual question answering and a 48.7% GIoU score on ReasonSeg for reasoning segmentation. Extensive testing validates the efficacy and robustness of FaST's core components, showcasing its potential to advance the development of cognitive visual agents in AI systems. The code is available at https://github.com/GuangyanS/Sys2-LLaVA.
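
One standard way to realize the "adjusting model confidence" step is temperature scaling (Guo et al., 2017). The paper does not spell out its calibration method, so the sketch below is a generic assumption rather than FaST's documented mechanism; the logits are toy values.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature flattens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy logits for three candidate answers (illustrative values only).
logits = [3.2, 1.1, 0.4]
p_raw = softmax(logits)                   # max prob ~0.85 (overconfident)
p_cal = softmax(logits, temperature=2.0)  # max prob ~0.63 (softened)
print(round(p_raw.max(), 2), round(p_cal.max(), 2))
```

Raising the temperature flattens the answer distribution, so an overconfident fast-path response registers as uncertain and can be escalated to the slow, context-gathering path.
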
Problem

Research questions and friction points this paper is trying to address.

AI Cognitive Processes
Visual Understanding
Fast vs. Slow Thinking
Innovation

Methods, ideas, or system contributions that make the work stand out.

FaST
Adaptive AI Thinking
Hierarchical Decision Making