AuthGlass: Enhancing Voice Authentication on Smart Glasses via Air-Bone Acoustic Features

📅 2025-09-25
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
Voice authentication on smart glasses is vulnerable to replay and synthetic spoofing attacks and lacks robustness to environmental noise. To address this, we propose the first air-conduction (AC) and bone-conduction (BC) dual-modal voice authentication method tailored for smart glasses. We design a prototype system integrating 14 AC microphones and 2 BC sensors to form a redundant, synchronized acoustic sensing array. We further introduce a fusion modeling algorithm that jointly encodes spatial sound-field features and craniofacial vibration dynamics, simultaneously enhancing liveness discrimination and noise robustness. Evaluated in multi-scenario experiments involving 42 participants, our method achieves 98.7% authentication accuracy, outperforming single-modal baselines by 12.4%, and maintains stable performance under realistic adversarial conditions, including high ambient noise and replay attacks. This work pioneers on-glasses dual-path synergistic acoustic perception, establishing a new paradigm for secure voice interaction on wearable devices.
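The summary does not detail the fusion architecture, so the following is only a minimal sketch of the general idea: encode the AC and BC channels separately, concatenate the embeddings into one identity vector, and verify by cosine similarity against an enrolled template. All dimensions, the random-projection "encoders", and the threshold are hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frames, proj):
    """Toy encoder: project per-frame features and mean-pool them into an
    L2-normalised embedding (a stand-in for a learned branch encoder)."""
    emb = (frames @ proj).mean(axis=0)
    return emb / np.linalg.norm(emb)

# Hypothetical feature sizes: 14 AC channels x 64 spectral bins,
# 2 BC channels x 64 bins, each branch mapped to a 128-d embedding.
ac_proj = rng.normal(size=(14 * 64, 128))  # air-conduction branch
bc_proj = rng.normal(size=(2 * 64, 128))   # bone-conduction branch

def fuse(ac_frames, bc_frames):
    """Concatenate the two modality embeddings into one identity vector."""
    e = np.concatenate([encode(ac_frames, ac_proj),
                        encode(bc_frames, bc_proj)])
    return e / np.linalg.norm(e)

def verify(probe, template, threshold=0.7):
    """Accept if cosine similarity to the enrolled template is high enough."""
    return float(probe @ template) >= threshold

# Enrolment and a slightly perturbed probe from the same synthetic "speaker".
ac = rng.normal(size=(50, 14 * 64))
bc = rng.normal(size=(50, 2 * 64))
template = fuse(ac, bc)
probe = fuse(ac + 0.05 * rng.normal(size=ac.shape),
             bc + 0.05 * rng.normal(size=bc.shape))
print(verify(probe, template))
```

The point of the concatenation step is that either branch alone can still dominate the score, which mirrors the redundancy the paper attributes to its 14+2 sensing array.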

📝 Abstract
With the rapid advancement of smart glasses, voice interaction has become widely deployed due to its naturalness and convenience. However, its practicality is often undermined by the vulnerability to spoofing attacks and interference from surrounding sounds, making seamless voice authentication crucial for smart glasses usage. To address this challenge, we propose AuthGlass, a voice authentication approach that leverages both air- and bone-conducted speech features to enhance accuracy and liveness detection. Aiming to gain comprehensive knowledge on speech-related acoustic and vibration features, we built a smart glasses prototype with redundant synchronized microphones: 14 air-conductive microphones and 2 bone-conductive units. In a study with 42 participants, we validated that combining sound-field and vibration features significantly improves authentication robustness and attack resistance. Furthermore, experiments demonstrated that AuthGlass maintains competitive accuracy even under various practical scenarios, highlighting its applicability and scalability for real-world deployment.
Problem

Research questions and friction points this paper is trying to address.

Addressing voice authentication vulnerability to spoofing attacks on smart glasses
Overcoming interference from surrounding sounds during voice authentication processes
Enhancing liveness detection accuracy for secure voice interaction on wearables
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging air-bone acoustic features for authentication
Using a synchronized array of 14 air-conduction microphones and 2 bone-conduction sensors
Combining sound-field and vibration features to improve security
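One intuition behind the liveness claim above is that a loudspeaker replay drives the air-conduction microphones but produces little correlated skull vibration at the bone-conduction sensor. The paper's actual liveness mechanism is not described here; the sketch below illustrates that intuition with a simple envelope-correlation check on synthetic signals, where the sample rate, window size, and threshold are all assumptions.

```python
import numpy as np

def envelope(x, win=160):
    """Short-time energy envelope over fixed windows (toy, 10 ms at 16 kHz)."""
    n = len(x) // win
    return np.sqrt((x[: n * win].reshape(n, win) ** 2).mean(axis=1))

def is_live(ac, bc, threshold=0.5):
    """Flag a live speaker when the air- and bone-channel energy envelopes
    co-vary; a replayed recording leaves the BC channel mostly sensor noise."""
    r = np.corrcoef(envelope(ac), envelope(bc))[0, 1]
    return bool(r >= threshold)

rng = np.random.default_rng(1)
# Synthetic "speech": white noise amplitude-modulated in 10 ms segments.
speech = rng.normal(size=16000) * np.repeat(rng.uniform(0.2, 1.0, 100), 160)
live_bc = 0.3 * speech + 0.05 * rng.normal(size=16000)  # genuine: correlated
replay_bc = 0.05 * rng.normal(size=16000)               # replay: noise only
print(is_live(speech, live_bc), is_live(speech, replay_bc))
```

A real system would of course learn this discrimination jointly with the speaker model rather than hard-code a correlation threshold.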
👥 Authors

Weiye Xu
Tsinghua University, China

Zhang Jiang
Physicist at Argonne National Laboratory
film and surface, GISAXS, XPCS, coherent imaging, speckles

Siqi Zheng
Tsinghua University, China

Xiyuxing Zhang
Tsinghua University
ubiquitous computing, TinyML, wearable sensing

Yankai Zhao
Southern University of Science and Technology, China

Changhao Zhang
Ant Group, China

Jian Liu
Ant Group, China

Weiqiang Wang
Ant Group, China

Yuntao Wang
Tsinghua University
Human-Computer Interaction, Ubiquitous Computing, Physio-Behavioral Computing