HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses through Reasoning MLLMs

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) face two key bottlenecks in human-centered interaction: (1) the absence of fine-grained evaluation frameworks tailored to human-centric scenarios, and (2) limited capability in generating empathetic, context-aware responses. To address these, we propose HumanSense, a multimodal benchmark designed to evaluate complex human-intent understanding and empathetic response generation. Methodologically, we apply multi-stage, modality-progressive reinforcement learning to strengthen the reasoning of an Omni-modal model, and distill the consistent thought patterns of successful reasoning into chain-of-thought prompts that improve non-reasoning models without any fine-tuning. Experiments reveal that state-of-the-art models leave substantial room for improvement on advanced interaction-oriented tasks; supplementing visual input with audio and text yields clear gains, favoring Omni-modal models; and the reinforcement-learning stage delivers notable improvements on core metrics. HumanSense establishes both an evaluation paradigm and a technical pathway toward embodied, empathetic multimodal agents.

📝 Abstract
While Multimodal Large Language Models (MLLMs) show immense promise for achieving truly human-like interactions, progress is hindered by the lack of fine-grained evaluation frameworks for human-centered scenarios, encompassing both the understanding of complex human intentions and the provision of empathetic, context-aware responses. Here we introduce HumanSense, a comprehensive benchmark designed to evaluate the human-centered perception and interaction capabilities of MLLMs, with a particular focus on deep understanding of extended multimodal contexts and the formulation of rational feedback. Our evaluation reveals that leading MLLMs still have considerable room for improvement, particularly for advanced interaction-oriented tasks. Supplementing visual input with audio and text information yields substantial improvements, and Omni-modal models show advantages on these tasks. Furthermore, we argue that appropriate feedback stems from a contextual analysis of the interlocutor's needs and emotions, with reasoning ability serving as the key to unlocking it. Accordingly, we employ a multi-stage, modality-progressive reinforcement learning strategy to enhance the reasoning abilities of an Omni model, achieving substantial gains on evaluation results. Additionally, we observe that successful reasoning processes exhibit highly consistent thought patterns. By designing corresponding prompts, we also enhance the performance of non-reasoning models in a training-free manner. Project page: https://digital-avatar.github.io/ai/HumanSense/
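The "multi-stage, modality-progressive" training described above can be sketched as a curriculum that introduces modalities gradually. The stage contents, step counts, and reward shape below are illustrative assumptions only; the abstract does not specify the exact recipe.

```python
# Minimal sketch of a modality-progressive RL curriculum: each stage
# activates more input modalities than the last, and a caller-supplied
# training step is driven through the stages in order. All names here
# (STAGES, run_curriculum, etc.) are hypothetical.

STAGES = [
    {"name": "text",       "modalities": ("text",)},
    {"name": "audio-text", "modalities": ("text", "audio")},
    {"name": "omni",       "modalities": ("text", "audio", "video")},
]

def modality_schedule(stage_idx: int) -> tuple:
    """Return the modalities active at a given curriculum stage."""
    return STAGES[min(stage_idx, len(STAGES) - 1)]["modalities"]

def run_curriculum(train_step, steps_per_stage: int = 2) -> list:
    """Drive train_step(modalities) through each stage; log (stage, reward)."""
    log = []
    for i, stage in enumerate(STAGES):
        for _ in range(steps_per_stage):
            reward = train_step(modality_schedule(i))
            log.append((stage["name"], reward))
    return log

# Dummy step whose "reward" is just the number of fused modalities.
history = run_curriculum(lambda mods: len(mods))
```

In a real setup, `train_step` would wrap an RL update (e.g. a policy-gradient step) on the Omni model with the inactive modalities masked out; here it is a stub so the staging logic stands alone.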
Problem

Research questions and friction points this paper is trying to address.

Lack of fine-grained evaluation frameworks for human-centered MLLM interactions
Need for deeper understanding of extended multimodal contexts to ground rational feedback
Improving reasoning abilities so models provide empathetic, context-aware responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal benchmark for human-centered MLLM evaluation
Modality-progressive reinforcement learning for reasoning
Training-free prompt design for non-reasoning models
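The training-free contribution above can be illustrated with a structured-reasoning prompt template. The wording below is a hypothetical reconstruction: the paper reports only that consistent thought patterns from successful reasoning were distilled into prompts, not the exact text.

```python
# Hypothetical sketch of the training-free prompt design: a chain-of-
# thought template asking a non-reasoning MLLM to analyze the
# interlocutor's emotions and needs before answering. Template wording
# is an assumption, not the paper's actual prompt.

REASONING_TEMPLATE = (
    "You are observing a multimodal interaction.\n"
    "Context: {context}\n\n"
    "Before responding, reason step by step:\n"
    "1. What is the interlocutor's emotional state?\n"
    "2. What do they actually need from this exchange?\n"
    "3. What response would be empathetic and context-aware?\n"
    "Then give your final response."
)

def build_prompt(context: str) -> str:
    """Fill the structured-reasoning template with the dialogue context."""
    return REASONING_TEMPLATE.format(context=context)

prompt = build_prompt("A user sighs and says the demo failed again.")
```

Because the reasoning structure lives entirely in the prompt, this approach requires no gradient updates and can be applied to any instruction-following model.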
Zheng Qin
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University
Ruobing Zheng
Ant Group
Yabing Wang
Xi’an Jiaotong University
multimodal learning
Tianqi Li
Ant Group
Yi Yuan
NetEase Fuxi AI Lab
deep learning, computer vision
Jingdong Chen
Ant Group
Le Wang
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University