🤖 AI Summary
This study addresses the degradation of gender classification performance caused by makeup, facial disguise, and similar factors. We propose an efficient gender recognition method based on color periocular images—specifically, the eyelids, eyebrows, and surrounding regions—employing a lightweight convolutional neural network (CNN) designed to extract discriminative texture and shape features without requiring full-face input. Evaluated on the private CVBL dataset, our model achieves 99% accuracy; on the public FM dataset, it attains 96% accuracy with only 7.235 million parameters—substantially outperforming state-of-the-art approaches. To our knowledge, this is the first systematic investigation demonstrating the strong gender-discriminative capability of the periocular region in the absence of global facial information. Moreover, our lightweight CNN architecture not only maintains high accuracy but also exhibits robust generalization and practical deployment potential.
📝 Abstract
Gender classification has become important in various fields, including security, human-machine interaction, surveillance, and advertising. However, its accuracy can be degraded by factors such as cosmetics and disguise. Our study therefore addresses this concern by focusing on gender classification from color images of the periocular region, the area surrounding the eye that includes the eyelids, eyebrows, and the region between them. This region contains valuable visual cues from which key features for gender classification can be extracted. This paper introduces a Convolutional Neural Network (CNN) model that uses color image databases to evaluate the effectiveness of the periocular region for gender classification. To validate the model's performance, we conducted tests on two eye datasets: CVBL and Female and Male (FM). The proposed architecture achieved an outstanding accuracy of 99% on the previously unused CVBL dataset and 96% on the FM dataset with a small number of learnable parameters (7,235,089). To assess the effectiveness of our model for gender classification from the periocular region, we evaluated it on an extensive range of metrics and compared it with other state-of-the-art approaches. The results demonstrate the efficacy of our model and suggest its potential for practical application in domains such as security and surveillance.
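To illustrate why a CNN at this scale counts as lightweight, the sketch below tallies the learnable parameters of a hypothetical small CNN for 64×64 color periocular images. The layer sizes are assumptions chosen for illustration only, not the paper's actual architecture (whose reported total is 7,235,089 parameters); the point is the bookkeeping, not the exact figure:

```python
# Parameter counting for a hypothetical lightweight gender-classification CNN.
# Layer sizes here are illustrative assumptions, NOT the architecture from the paper.

def conv2d_params(kernel, c_in, c_out):
    """Conv layer: kernel*kernel*c_in weights per filter, plus one bias per output channel."""
    return (kernel * kernel * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """Fully connected layer: weight matrix plus one bias per output unit."""
    return (n_in + 1) * n_out

# Assumed stack for a 64x64x3 input; each conv is followed by 2x2 max pooling,
# so the spatial size shrinks 64 -> 32 -> 16 -> 8 -> 4 (pooling adds no parameters).
layers = [
    conv2d_params(3, 3, 32),          # conv1: 3x3, 3 -> 32 channels
    conv2d_params(3, 32, 64),         # conv2: 3x3, 32 -> 64 channels
    conv2d_params(3, 64, 128),        # conv3: 3x3, 64 -> 128 channels
    conv2d_params(3, 128, 256),       # conv4: 3x3, 128 -> 256 channels
    dense_params(4 * 4 * 256, 1536),  # flatten 4x4x256 -> hidden layer
    dense_params(1536, 2),            # male / female logits
]

total = sum(layers)
print(f"total learnable parameters: {total:,}")
```

Under these assumed sizes the total lands around 6.7 million, the same order of magnitude as the paper's 7.2 million; note that the single dense layer after flattening dominates the count, which is why compact CNNs often shrink the spatial map aggressively before any fully connected layer.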