🤖 AI Summary
Existing biometric systems face a fundamental trade-off between multi-modal recognition capability and user convenience, as a single frontal-face image is typically assumed to lack sufficient information for robust multi-trait extraction. To address this, we propose a lightweight multi-modal biometric framework that simultaneously extracts five distinct biometric traits (face, iris, periocular region, nose, and eyebrow) from a single frontal-face image. This work presents the first end-to-end joint modeling of five modalities from one image, implemented as a multi-branch deep neural network with feature-level fusion. We validate the method on the CASIA-Iris-Distance dataset. Compared with conventional single-modal and multi-source multi-modal baselines, our approach improves identification accuracy by 3.2% while preserving acquisition simplicity and substantially improving cross-scenario robustness. The proposed framework establishes a new paradigm for low-intrusion, high-security biometric authentication.
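The multi-branch architecture with feature-level fusion can be sketched as follows. This is a minimal illustration, not the paper's implementation: the region names match the five traits above, but the crop size, embedding dimension, and random-projection "branches" (stand-ins for the learned CNN branches) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

REGIONS = ["face", "iris", "periocular", "nose", "eyebrow"]
CROP_SIZE = 32   # assumed side length of each region crop
EMBED_DIM = 64   # assumed per-branch embedding size

# One fixed projection matrix per branch (a stand-in for learned CNN weights).
branch_weights = {r: rng.standard_normal((CROP_SIZE * CROP_SIZE, EMBED_DIM))
                  for r in REGIONS}

def branch_embed(region, crop):
    """Flatten a region crop and project it into that branch's feature space."""
    v = crop.reshape(-1) @ branch_weights[region]
    return v / (np.linalg.norm(v) + 1e-8)   # L2-normalize each branch output

def fuse(crops):
    """Feature-level fusion: concatenate the five branch embeddings."""
    feats = [branch_embed(r, crops[r]) for r in REGIONS]
    fused = np.concatenate(feats)
    return fused / np.linalg.norm(fused)

# Toy region crops, all taken from one frontal-face image in practice.
crops = {r: rng.standard_normal((CROP_SIZE, CROP_SIZE)) for r in REGIONS}
fused = fuse(crops)
print(fused.shape)   # (320,) = 5 regions x 64 dims
```

Concatenation followed by a shared normalization is the simplest form of feature-level fusion; a learned fusion layer could replace the plain concatenation without changing the overall structure.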
📝 Abstract
Multibiometrics, which uses multiple biometric traits rather than a single trait to authenticate individuals and thereby improve recognition performance, has been widely investigated. However, previous studies have combined individually acquired biometric traits or have not fully considered the convenience of the system. Focusing on a single face image, we propose a novel multibiometric method that combines five biometric traits, i.e., face, iris, periocular region, nose, and eyebrow, all of which can be extracted from a single face image. The proposed method does not sacrifice the convenience of biometrics, since only a single face image is used as input. Through a variety of experiments on the CASIA-Iris-Distance database, we demonstrate the effectiveness of the proposed multibiometric method.
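Once the five traits are fused into one template, identification reduces to nearest-neighbor matching over enrolled templates. The sketch below is an illustrative assumption, not the paper's evaluation protocol: the gallery size, the 320-dimensional fused embedding, and cosine-similarity scoring are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 320   # assumed fused-embedding size (5 traits x 64 dims each)

# Toy gallery: one L2-normalized fused template per enrolled identity.
gallery = rng.standard_normal((10, DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Probe: a noisy re-acquisition of identity 3's template.
probe = gallery[3] + 0.1 * rng.standard_normal(DIM)
probe /= np.linalg.norm(probe)

# Closed-set identification: cosine similarity, take the best-scoring identity.
scores = gallery @ probe
predicted = int(np.argmax(scores))
print(predicted)   # 3
```

Because every template is unit-normalized, the dot product equals cosine similarity, so a single matrix-vector product scores the probe against the whole gallery.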