Efficient Solutions for Mitigating Initialization Bias in Unsupervised Self-Adaptive Auditory Attention Decoding

📅 2025-09-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In unsupervised adaptive auditory attention decoding (AAD), initialization bias degrades decoding performance, while existing bias-correcting methods suffer from computational complexity that grows with data volume. To address this, we propose three low-overhead, constant-complexity adaptive algorithms grounded in an enhanced stimulus reconstruction framework. Integrating unsupervised learning with adaptive signal processing, our approach enables robust electroencephalographic (EEG) attention tracking without labeled data. Crucially, the algorithms eliminate initialization bias while maintaining fixed per-iteration computational cost, unlike state-of-the-art (SOTA) methods, whose complexity grows linearly or worse with time-series length. Evaluated in multi-speaker scenarios, our methods achieve decoding accuracy comparable to SOTA while improving inference efficiency by an order of magnitude. The implementation is fully open-sourced, ensuring reproducibility.
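To make the idea concrete, the sketch below illustrates the general recipe behind unsupervised self-adaptive stimulus-reconstruction AAD with a constant per-iteration cost: the decoder reconstructs a speech envelope from EEG, self-labels the window with the better-correlated speaker, and updates exponentially weighted correlation statistics so the per-window cost does not grow with the amount of data seen. This is a hedged illustration, not the paper's exact algorithm; the simulated forward model, forgetting factor, window length, and all variable names are assumptions.

```python
import numpy as np

# Illustrative sketch of unsupervised self-adaptive stimulus-reconstruction
# AAD (NOT the paper's exact method; all parameters are assumptions).
rng = np.random.default_rng(0)
C = 8          # EEG channels
T = 200        # samples per decision window
lam = 0.98     # forgetting factor: old statistics decay, cost stays fixed

mix = rng.normal(size=C)           # hypothetical forward model: the attended
                                   # envelope projects onto all channels
W = 0.01 * rng.normal(size=C)      # decoder, random init (source of the bias)
R = np.eye(C)                      # running EEG autocorrelation estimate
r = np.zeros(C)                    # running EEG/envelope cross-correlation

decisions = []
for _ in range(30):
    env_a = rng.normal(size=T)     # attended speaker's envelope
    env_b = rng.normal(size=T)     # competing speaker's envelope
    eeg = np.outer(env_a, mix) + 0.3 * rng.normal(size=(T, C))

    recon = eeg @ W                # stimulus reconstruction from EEG
    corr_a = np.corrcoef(recon, env_a)[0, 1]
    corr_b = np.corrcoef(recon, env_b)[0, 1]
    label = env_a if corr_a >= corr_b else env_b   # self-generated label
    decisions.append(corr_a >= corr_b)

    # Exponentially weighted updates: per-window cost is O(T*C^2 + C^3),
    # independent of how much data has been processed so far.
    R = lam * R + eeg.T @ eeg
    r = lam * r + eeg.T @ label
    W = np.linalg.solve(R + 1e-6 * np.eye(C), r)
```

Even when early self-labels are wrong (the initialization-bias regime), mislabeled windows contribute only zero-mean noise to `r`, while correctly labeled windows contribute a consistent signal term, so the decoder drifts toward the attended speaker over time; the bias-mitigation algorithms in the paper make this correction explicit while keeping the update cost constant.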

πŸ“ Abstract
Decoding the attended speaker in a multi-speaker environment from electroencephalography (EEG) has attracted growing interest in recent years, with neuro-steered hearing devices as a driver application. Current approaches typically rely on ground-truth labels of the attended speaker during training, necessitating calibration sessions for each user and each EEG set-up to achieve optimal performance. While unsupervised self-adaptive auditory attention decoding (AAD) for stimulus reconstruction has been developed to eliminate the need for labeled data, it suffers from an initialization bias that can compromise performance. Although an unbiased variant has been proposed to address this limitation, it introduces substantial computational complexity that scales with data size. This paper presents three computationally efficient alternatives that achieve comparable performance, but with a significantly lower and constant computational cost. The code for the proposed algorithms is available at https://github.com/YYao-42/Unsupervised_AAD.
Problem

Research questions and friction points this paper is trying to address.

Mitigating initialization bias in unsupervised self-adaptive auditory attention decoding
Reducing computational complexity in EEG-based speaker attention decoding
Eliminating need for labeled calibration data in neuro-steered hearing devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised self-adaptive auditory attention decoding
Computationally efficient bias mitigation alternatives
Constant, low per-iteration computational cost
Yuanyuan Yao
KU Leuven, Department of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, Belgium
Simon Geirnaert
KU Leuven, Department of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, Belgium; KU Leuven, Department of Neurosciences, Research Group ExpORL, Belgium
Tinne Tuytelaars
KU Leuven - PSI, Belgium
computer vision, continual learning
Alexander Bertrand
KU Leuven
Signal Processing, distributed algorithms, EEG, BCI, sensor arrays