🤖 AI Summary
Large vision-language models (VLMs) lack systematic evaluation for image-induced emotion recognition. Method: we introduce EmoBench, the first dedicated benchmark for this task, enabling zero-shot and few-shot evaluation, adversarial robustness analysis, and a fine-grained error typology. Combining multi-model assessment with human-in-the-loop diagnosis, we identify three core bottlenecks: suboptimal prompt engineering, insufficient emotional coverage in training data, and low-quality cross-modal alignment. Contributions/Results: (1) EmoBench establishes the first standardized benchmark for induced-emotion recognition; (2) we propose a “Representation–Instruction–Alignment” framework for attributing performance along three dimensions; (3) we provide reproducible fine-tuning strategies that mitigate the identified limitations. Our findings reveal pervasive affective biases and strong contextual sensitivity in current VLMs, offering both theoretical insights and practical guidelines for more empathetic human-AI interaction.
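To make the evaluation protocol concrete, below is a minimal sketch of a zero-shot evoked-emotion evaluation loop in Python. The label set, the `query_vlm` helper, and the dataset format are illustrative assumptions on our part, not the paper's released code or taxonomy.

```python
from typing import Callable, Iterable, Tuple

# Assumed label set for evoked emotions; the benchmark's actual taxonomy may differ.
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

PROMPT = (
    "Which emotion does this image most likely evoke in a viewer? "
    f"Answer with exactly one word from: {', '.join(EMOTIONS)}."
)

def evaluate_zero_shot(
    dataset: Iterable[Tuple[str, str]],
    query_vlm: Callable[[str, str], str],
) -> float:
    """Return accuracy over (image_path, gold_label) pairs.

    query_vlm is a hypothetical adapter: it sends an image plus a text
    prompt to whichever VLM is under test and returns the raw text reply.
    """
    correct, total = 0, 0
    for image_path, gold in dataset:
        reply = query_vlm(image_path, PROMPT).strip().lower()
        # Lenient parsing: take the first known label mentioned in the reply.
        predicted = next((e for e in EMOTIONS if e in reply), None)
        correct += int(predicted == gold)
        total += 1
    return correct / max(total, 1)
```

A few-shot variant would prepend labeled exemplar images to the same prompt; the parsing and scoring logic stays unchanged.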
📝 Abstract
Large Vision-Language Models (VLMs) have achieved unprecedented success on a range of objective multimodal reasoning tasks. However, to communicate with humans empathetically and effectively, VLMs must also process and understand emotions well. Despite significant research attention on improving affective understanding, detailed evaluations of VLMs on emotion-related tasks are lacking, even though such evaluations could inform downstream fine-tuning efforts. In this work, we present the first comprehensive evaluation of VLMs for recognizing emotions evoked by images. We create a benchmark for the task of evoked emotion recognition and study the performance of VLMs on this task from the perspectives of correctness and robustness. Through several experiments, we identify important factors on which emotion recognition performance depends and characterize the kinds of errors VLMs make in the process. Finally, we pinpoint potential causes of these errors through a human evaluation study. We use our experimental results to inform recommendations for the future of emotion research in the context of VLMs.
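The abstract's robustness axis can be probed in the same harness. One simple check (an assumed protocol for illustration, not necessarily the paper's exact procedure) is whether a model's prediction survives reordering the answer options in the prompt, which speaks to the contextual sensitivity noted in the summary above:

```python
import random
from typing import Callable

# Same assumed label set as in the earlier sketch.
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def label_order_consistency(
    image_path: str,
    query_vlm: Callable[[str, str], str],
    n_trials: int = 5,
    seed: int = 0,
) -> float:
    """Fraction of trials agreeing with the majority prediction when only
    the order of the answer options changes between prompts."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trials):
        options = EMOTIONS[:]
        rng.shuffle(options)
        prompt = (
            "Which emotion does this image most likely evoke in a viewer? "
            f"Answer with exactly one word from: {', '.join(options)}."
        )
        reply = query_vlm(image_path, prompt).strip().lower()
        # Map the free-text reply back to a known label, if any.
        preds.append(next((e for e in EMOTIONS if e in reply), None))
    majority = max(set(preds), key=preds.count)
    return preds.count(majority) / n_trials
```

A score well below 1.0 suggests the model's answer depends on prompt framing rather than on image content alone.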