FaceLLM: A Multimodal Large Language Model for Face Understanding

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) exhibit limited performance in facial image understanding—including facial expression, pose, skin quality, and demographic attribute analysis—primarily due to the scarcity of large-scale, high-quality annotated datasets. To address this, the authors propose a weakly supervised data construction framework: attribute-aware prompting with ChatGPT automatically generates fine-grained facial-analysis question-answer pairs, yielding FairFaceGPT, a large-scale synthetic dataset dedicated to facial understanding. They further introduce FaceLLM, a specialized MLLM that combines attribute-aware prompt engineering with fine-tuning on this synthetic data to extract rich semantic information from facial visual features. Extensive experiments show that FaceLLM achieves state-of-the-art performance across multiple facial-analysis benchmarks, validating synthetic-data-driven, domain-specialized MLLM development.

📝 Abstract
Multimodal large language models (MLLMs) have shown remarkable performance in vision-language tasks. However, existing MLLMs are primarily trained on generic datasets, limiting their ability to reason about domain-specific visual cues such as those in facial images. In particular, tasks that require detailed understanding of facial structure, expression, emotion, and demographic features remain underexplored by MLLMs due to the lack of large-scale annotated face image-text datasets. In this work, we introduce FaceLLM, a multimodal large language model trained specifically for facial image understanding. To construct the training data, we propose a novel weakly supervised pipeline that uses ChatGPT with attribute-aware prompts to generate high-quality question-answer pairs based on images from the FairFace dataset. The resulting corpus, called FairFaceGPT, covers a diverse set of attributes including expression, pose, skin texture, and forensic information. Our experiments demonstrate that FaceLLM improves the performance of MLLMs on various face-centric tasks and achieves state-of-the-art performance. This work highlights the potential of synthetic supervision via language models for building domain-specialized MLLMs, and sets a precedent for trustworthy, human-centric multimodal AI systems. The FairFaceGPT dataset and pretrained FaceLLM models are publicly available on the project page.
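The abstract describes conditioning ChatGPT on known image attributes so the generated QA pairs stay grounded in verified labels. A minimal sketch of that attribute-aware prompt construction is below; the prompt wording, the helper name, and the QA format are assumptions for illustration, not the authors' actual implementation (FairFace's public annotations do include age, gender, and race labels).

```python
# Sketch of attribute-aware prompt construction, as described in the
# abstract. The exact prompt text used for FairFaceGPT is an assumption.

def build_attribute_prompt(attributes: dict) -> str:
    """Compose a ChatGPT prompt that conditions QA generation on
    verified FairFace attribute labels for a single face image."""
    attr_text = ", ".join(f"{k}: {v}" for k, v in attributes.items())
    return (
        "You are shown a face image with the following verified "
        f"attributes ({attr_text}). Generate a fine-grained "
        "question-answer pair about the face, covering aspects such as "
        "expression, pose, skin texture, or demographic attributes. "
        "Keep the answer consistent with the verified attributes."
    )

# Example FairFace-style annotation (field names follow the public
# FairFace label CSV: age, gender, race).
annotation = {"age": "30-39", "gender": "Male", "race": "East Asian"}
prompt = build_attribute_prompt(annotation)
# In the pipeline, this prompt would be sent to ChatGPT together with
# the image; the returned QA pairs form the FairFaceGPT corpus.
```

The key design point conveyed here is weak supervision: no human writes the QA pairs, but the dataset's existing attribute labels constrain the language model so its generated answers remain factually anchored.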
Problem

Research questions and friction points this paper is trying to address.

Enhancing facial image understanding with specialized multimodal models
Addressing lack of annotated face image-text datasets for training
Improving performance on face-centric tasks using synthetic supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ChatGPT for generating face image QA pairs
Trains specialized MLLM for facial understanding
Creates FairFaceGPT dataset with diverse attributes