🤖 AI Summary
This paper addresses the challenge of multimodal understanding of facial states, specifically action units (AUs) and emotions, by proposing MF², a multimodal foundation model for facial analysis. Methodologically, it introduces MFA, the first hierarchical facial language description dataset, synthesized using GPT-4o; designs the MF² architecture integrating local and global visual modeling via multi-scale CNN/ViT encoders and cross-modal contrastive alignment; and incorporates a parameter-efficient decoupled fine-tuning network (DFN) to enhance cross-task generalization. Contributions include: (1) the first unified multimodal modeling framework jointly capturing AUs and emotions; (2) the MFA dataset, which fills a critical gap in fine-grained facial semantic descriptions; and (3) state-of-the-art performance on both AU recognition and emotion classification, achieving superior accuracy, reduced computational cost, and strong cross-dataset transferability.
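The summary's mention of a parameter-efficient decoupled fine-tuning network (DFN) can be illustrated with a common pattern for such adapters. The paper does not specify the DFN's internals, so the sketch below is a hedged, LoRA-style stand-in: the backbone weight matrix is frozen and only a small low-rank correction is trained per task; all names and the rank value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class DecoupledAdapter:
    """Low-rank adapter over a frozen weight matrix.

    A sketch in the spirit of parameter-efficient fine-tuning (not the
    paper's actual DFN): freeze W, train only the small factors A and B,
    and add the low-rank update B @ A on top of the frozen path.
    """

    def __init__(self, frozen_w, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = frozen_w.shape
        self.w = frozen_w                                    # frozen backbone weights
        self.a = rng.normal(scale=0.01, size=(rank, d_in))   # trainable down-projection
        self.b = np.zeros((d_out, rank))                     # trainable up-projection (zero-init)

    def forward(self, x):
        # Frozen path plus a task-specific low-rank correction
        return x @ self.w.T + x @ self.a.T @ self.b.T

    def trainable_params(self):
        # Only A and B are updated during fine-tuning
        return self.a.size + self.b.size

w = np.ones((16, 32))           # stand-in for a frozen backbone layer (16*32 = 512 params)
adapter = DecoupledAdapter(w, rank=4)
```

Because `b` is zero-initialized, the adapter is an exact identity over the frozen model at the start of fine-tuning, while training only 4×32 + 16×4 = 192 parameters instead of 512.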
📝 Abstract
Multimodal foundation models have significantly improved feature representation by integrating information from multiple modalities, making them highly suitable for a broad set of applications. However, the exploration of multimodal facial representation for face understanding has been limited. Understanding and analyzing facial states, such as Action Units (AUs) and emotions, requires a comprehensive and robust framework that bridges visual and linguistic modalities. In this paper, we present a comprehensive pipeline for multimodal facial state analysis. First, we compile a new Multimodal Face Dataset (MFA) by generating detailed multilevel language descriptions of faces, incorporating Action Unit (AU) and emotion descriptions, by leveraging GPT-4o. Second, we introduce a novel Multilevel Multimodal Face Foundation model (MF²) tailored for Action Unit (AU) and emotion recognition. Our model incorporates comprehensive visual feature modeling at both the local and global levels of the face image, enhancing its ability to represent detailed facial appearances. This design aligns visual representations with structured AU and emotion descriptions, ensuring effective cross-modal integration. Third, we develop a Decoupled Fine-Tuning Network (DFN) that efficiently adapts MF² across various tasks and datasets. This approach not only reduces computational overhead but also broadens the applicability of the foundation model to diverse scenarios. Experiments show superior performance on both AU and emotion detection tasks.
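The cross-modal alignment between visual features and the structured AU/emotion descriptions can be sketched with a standard symmetric contrastive (InfoNCE) objective, as used in CLIP-style training. This is a minimal illustration, not the paper's exact loss; the function names, the temperature value, and the toy embeddings are all assumptions for demonstration.

```python
import numpy as np

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning paired image and text embeddings.

    A hedged sketch of CLIP-style cross-modal contrastive alignment:
    matched (image, description) pairs sit on the diagonal of the
    similarity matrix and are pulled together; all other pairs are
    pushed apart. The temperature value is illustrative.
    """
    # L2-normalize both embedding sets so logits are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # Numerically stable cross-entropy with matched pairs as targets
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))          # toy batch of 4 visual embeddings
loss = contrastive_alignment_loss(img, img.copy())  # perfectly aligned pairs
```

With perfectly aligned pairs the loss is near its minimum; shuffling the text embeddings against the images drives it up, which is the signal that trains the alignment.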