🤖 AI Summary
Micro-expression recognition (MER) faces two key challenges: entanglement between static appearance and dynamic motion features, and a semantic gap between textual labels and underlying facial muscle actions. To address these, we propose DEFT-LLM, a multi-expert disentangled architecture, and introduce Uni-MER, the first motion-driven instruction dataset for MER. Leveraging dual supervision from optical flow and Action Units (AUs), our framework explicitly separates facial dynamics into structural, textural, and semantic components. We design three specialized experts (motion, appearance, and semantics) and integrate a multimodal large language model to achieve cross-modal alignment. Evaluated on multiple standard MER benchmarks, DEFT-LLM achieves state-of-the-art performance, significantly improving fine-grained local motion capture and model interpretability. Notably, it establishes a physically grounded semantic alignment between low-level motion representations and high-level linguistic supervision.
📝 Abstract
Micro-expression recognition (MER) is crucial for inferring genuine emotion. Applying a multimodal large language model (MLLM) to this task enables spatio-temporal analysis of facial motion and provides interpretable descriptions. However, two core challenges remain: (1) the entanglement of static appearance and dynamic motion cues prevents the model from focusing on subtle motion; (2) textual labels in existing MER datasets do not fully correspond to the underlying facial muscle movements, creating a semantic gap between text supervision and physical motion. To address these issues, we propose DEFT-LLM, which achieves motion-semantic alignment through multi-expert disentanglement. We first introduce Uni-MER, a motion-driven instruction dataset designed to align text with local facial motion. Its construction leverages dual constraints from optical flow and Action Unit (AU) labels to ensure spatio-temporal consistency and a physically plausible correspondence to facial movements. We then design an architecture with three experts that decouples facial dynamics into independent and interpretable representations: structure, dynamic texture, and motion semantics. By integrating the instruction-aligned knowledge from Uni-MER into DEFT-LLM, our method injects effective physical priors for micro-expressions while also leveraging the cross-modal reasoning ability of large language models, enabling precise capture of subtle emotional cues. Experiments on multiple challenging MER benchmarks demonstrate state-of-the-art performance, as well as a particular advantage in interpretable modeling of local facial motion.
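The three-expert disentanglement described above can be illustrated with a minimal sketch. This is not the paper's implementation: the expert architectures, feature dimensions, and fusion scheme in DEFT-LLM are not specified here, so all names and shapes below are hypothetical. The sketch only shows the general pattern of routing appearance features to structure/texture experts and optical-flow features to a motion expert, then concatenating the disentangled representations.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_expert(in_dim: int, out_dim: int):
    """Hypothetical expert: a single random linear projection.
    In a real system each expert would be a learned network."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.01
    return lambda x: x @ W

d_in, d_out = 64, 16  # illustrative dimensions, not from the paper

# Three experts, one per disentangled factor.
structure_expert = make_expert(d_in, d_out)   # facial structure
texture_expert = make_expert(d_in, d_out)     # dynamic texture
motion_expert = make_expert(d_in, d_out)      # motion semantics

# Hypothetical inputs: a static-frame feature and an optical-flow feature.
frame_feat = rng.standard_normal(d_in)
flow_feat = rng.standard_normal(d_in)

# Appearance cues go to structure/texture experts; motion cues to the motion expert.
z_structure = structure_expert(frame_feat)
z_texture = texture_expert(frame_feat)
z_motion = motion_expert(flow_feat)

# Concatenate the independent representations before downstream (e.g. MLLM) fusion.
fused = np.concatenate([z_structure, z_texture, z_motion])
print(fused.shape)  # (48,)
```

Because each factor occupies its own slice of the fused vector, a downstream model can attend to motion semantics without it being entangled with static appearance, which is the core motivation stated in the abstract.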