EyeSeg: An Uncertainty-Aware Eye Segmentation Framework for AR/VR

📅 2025-07-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Eye segmentation in AR/VR systems loses robustness under motion blur, eyelid occlusion, and domain shift, which degrades gaze estimation accuracy. Method: This paper proposes an uncertainty-aware eye segmentation framework that integrates Bayesian posterior uncertainty learning, under a closed-set prior, into eye segmentation. The authors theoretically establish that a statistic of the learned posterior quantifies segmentation confidence and enables weighted fusion of multiple gaze estimates. The approach jointly designs a Bayesian deep learning architecture with a segmentation network to produce both pixel-wise segmentation masks and corresponding per-pixel uncertainty scores. Contribution/Results: The method achieves state-of-the-art performance on the MIoU, E1, F1, and ACC metrics and shows markedly improved robustness under motion blur, occlusion, and cross-domain conditions, providing a reliable foundation for high-precision gaze estimation.
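The coupling of a segmentation network with a Bayesian uncertainty module can be illustrated with a short sketch. Everything below, including the `UncertainSegHead` name, the layer shapes, and the choice of a predicted per-pixel log-variance as the uncertainty statistic, is an assumption for illustration and not the authors' released architecture.

```python
# Minimal sketch of a head that emits both a per-pixel segmentation mask
# and a per-pixel uncertainty score. The log-variance parameterization is
# an illustrative assumption; the paper learns a Bayesian posterior under
# a closed-set prior, whose exact form is not reproduced here.
import torch
import torch.nn as nn

class UncertainSegHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.seg = nn.Conv2d(in_channels, num_classes, kernel_size=1)  # class logits
        self.log_var = nn.Conv2d(in_channels, 1, kernel_size=1)        # per-pixel log-variance

    def forward(self, feats: torch.Tensor):
        logits = self.seg(feats)       # (B, C, H, W) class logits
        log_var = self.log_var(feats)  # (B, 1, H, W) predicted log-variance
        mask = logits.argmax(dim=1)    # hard segmentation mask
        uncertainty = log_var.exp()    # higher value = less confident pixel
        return mask, logits, uncertainty
```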

📝 Abstract
Human-machine interaction through augmented reality (AR) and virtual reality (VR) is increasingly prevalent, requiring accurate and efficient gaze estimation, which in turn hinges on the accuracy of eye segmentation, to enable smooth user experiences. We introduce EyeSeg, a novel eye segmentation framework designed to overcome key challenges that existing approaches struggle with: motion blur, eyelid occlusion, and train-test domain gaps. In these situations, existing models fail to extract robust features, leading to suboptimal performance. Noting that these challenges can generally be quantified by uncertainty, we design EyeSeg as an uncertainty-aware eye segmentation framework for AR/VR in which we explicitly model the uncertainties by performing Bayesian uncertainty learning of a posterior under a closed-set prior. Theoretically, we prove that a statistic of the learned posterior indicates segmentation uncertainty levels; empirically, EyeSeg outperforms existing methods on downstream tasks such as gaze estimation. EyeSeg outputs an uncertainty score alongside the segmentation result, weighting and fusing multiple gaze estimates for robustness, which proves effective especially under motion blur, eyelid occlusion, and cross-domain challenges. Moreover, empirical results show that EyeSeg achieves segmentation improvements in MIoU, E1, F1, and ACC over previous approaches. The code is publicly available at https://github.com/JethroPeng/EyeSeg.
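The abstract's uncertainty-weighted fusion of multiple gaze estimates can be sketched as an inverse-uncertainty weighted average. The specific weighting rule (1 / (u + eps)) and the reduction of per-pixel scores to a single scalar per estimate are assumptions made for this sketch; the paper derives its weights from a statistic of the learned posterior.

```python
# Hedged sketch: fuse N gaze estimates, each carrying an image-level
# uncertainty score (e.g., the mean per-pixel uncertainty over the
# predicted eye region), so that low-uncertainty estimates dominate.
import numpy as np

def fuse_gaze_estimates(gazes: np.ndarray, uncertainties: np.ndarray,
                        eps: float = 1e-6) -> np.ndarray:
    """gazes: (N, 2) yaw/pitch estimates; uncertainties: (N,) scores."""
    weights = 1.0 / (uncertainties + eps)  # confident estimates weigh more
    weights /= weights.sum()               # normalize to a convex combination
    return weights @ gazes                 # (2,) fused gaze direction

# Usage: the third (blurred) frame has high uncertainty and is down-weighted.
gazes = np.array([[0.10, -0.05], [0.12, -0.04], [0.40, 0.20]])
fused = fuse_gaze_estimates(gazes, np.array([0.2, 0.3, 5.0]))
```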
Problem

Research questions and friction points this paper is trying to address.

Improves gaze estimation accuracy in AR/VR via eye segmentation
Addresses motion blur, eyelid occlusion, and domain gaps
Introduces uncertainty-aware segmentation for robust performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian uncertainty learning for segmentation
Uncertainty-aware framework for AR/VR
Robust gaze estimation via uncertainty weighting
👥 Authors
Zhengyuan Peng
Shanghai Jiao Tong University
Jianqing Xu
Tencent
Shen Li
National University of Singapore
Jiazhen Ji
Tencent
Yuge Huang
Tencent
Jingyun Zhang
PhD student, Beihang University
Jinmin Li
Tsinghua University
Shouhong Ding
Tencent
Rizen Guo
Tencent
Xin Tan
East China Normal University
Lizhuang Ma
Shanghai Jiao Tong University, East China Normal University