🤖 AI Summary
Conventional brain–AI interfaces (BAIs) rely on motor execution, motor imagery, or syntactically structured language, limiting accessibility for individuals with motor or speech impairments.
Method: This work proposes Auditory Intention Decoding (AID), a novel paradigm for direct brain–machine dialogue that requires neither motor action nor syntactic language production. Implemented on the EEGChat platform, AID presents multi-option auditory stimuli that elicit event-related potentials (ERPs) in scalp EEG, which are decoded in real time by machine-learning classifiers.
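To make the decoding step concrete, here is a minimal sketch of single-trial ERP classification, assuming epochs cut around stimulus onsets and a shrinkage-LDA classifier (a common choice for ERP decoding). The paper does not specify its exact pipeline, so the sampling rate, epoch window, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative single-trial ERP decoding sketch (not the authors' exact pipeline).
# Assumes a continuous EEG array, known auditory stimulus onsets, and binary
# labels marking whether each stimulus matched the attended option.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250            # sampling rate in Hz (assumed)
EPOCH = (0.0, 0.8)  # post-stimulus window in seconds (assumed)

def epoch_eeg(eeg, onsets, fs=FS, window=EPOCH):
    """Cut fixed-length epochs around each stimulus onset.

    eeg    : (n_channels, n_samples) continuous recording
    onsets : sample indices of auditory stimulus onsets
    returns: (n_trials, n_channels * n_times) feature matrix
    """
    start, stop = int(window[0] * fs), int(window[1] * fs)
    trials = [eeg[:, o + start : o + stop] for o in onsets]
    return np.stack(trials).reshape(len(trials), -1)

# Placeholder data: 40 trials, 8 channels; labels are random here, so the
# cross-validated accuracy will hover around chance in this demo.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 60 * FS))
onsets = rng.choice(np.arange(FS, 58 * FS), size=40, replace=False)
labels = rng.integers(0, 2, size=40)

X = epoch_eeg(eeg, onsets)
# Shrinkage LDA is a common choice for ERP classification on few trials.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```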
Contribution/Results: AID is the first auditory BCI paradigm explicitly designed for natural conversational interaction: it eliminates dependence on motor imagery and visual input while supporting direct, high-temporal-resolution intention selection. Feasibility experiments demonstrate statistically significant single-trial decoding accuracy (p < 0.001), providing the first systematic validation of auditory ERP-based BAIs for low-cognitive-load, real-time communication. This establishes a new, accessible interaction pathway for individuals with aphasia or motor disabilities.
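Significance of decoding accuracy (such as the reported p < 0.001) is often assessed with a label-permutation test, which compares the true cross-validated score against a null distribution built from shuffled labels. The sketch below uses scikit-learn's permutation_test_score on synthetic data; it illustrates the general technique and is not necessarily the authors' exact statistical procedure.

```python
# Hedged sketch: testing whether single-trial decoding beats chance via a
# label-permutation test (a standard approach; the paper's exact test may differ).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
# Hypothetical feature matrix: 40 trials x 1600 ERP features with a weak,
# simulated class effect added to the attended-stimulus trials.
y = rng.integers(0, 2, size=40)
X = rng.standard_normal((40, 1600)) + 0.2 * y[:, None]

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
# True accuracy vs. a null distribution from classifiers fit on shuffled labels.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=5, n_permutations=200, random_state=0
)
print(f"accuracy={score:.2f}, permutation p={p_value:.3f}")
```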
📝 Abstract
We introduce a novel auditory brain–computer interface (BCI) paradigm, Auditory Intention Decoding (AID), designed to enhance communication capabilities within the brain–AI interface (BAI) system EEGChat. AID enables users to select among multiple auditory options (intentions) by analyzing their brain responses, offering a pathway to construct a communication system that requires neither muscle movement nor syntactic formation. To evaluate the feasibility of this paradigm, we conducted a proof-of-concept study. The results demonstrated statistically significant decoding performance, validating the approach's potential. Despite these promising findings, further optimization is required to enhance system performance and realize the paradigm's practical application.
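One plausible realization of the multi-option selection described in the abstract (assumed here, not confirmed by the paper) is to score every stimulus presentation with an "attended vs. unattended" classifier and pick the option whose presentations accumulate the most evidence. The sketch below uses hypothetical names and simulated scores to show this aggregation step.

```python
# Sketch of resolving a multi-option auditory selection from per-stimulus
# classifier scores (illustrative; all names and parameters are assumptions).
import numpy as np

def select_option(scores, option_ids, n_options):
    """Pick the option whose stimulus presentations look most 'attended'.

    scores     : (n_stimuli,) classifier evidence that each stimulus was attended
    option_ids : (n_stimuli,) which option each stimulus presentation belonged to
    returns    : index of the option with the highest mean evidence
    """
    means = [scores[option_ids == k].mean() for k in range(n_options)]
    return int(np.argmax(means))

# Hypothetical session: 4 options, each presented 10 times in random order,
# with a strong simulated attention effect on option 2.
rng = np.random.default_rng(1)
option_ids = rng.permutation(np.repeat(np.arange(4), 10))
scores = rng.standard_normal(40) + 2.0 * (option_ids == 2)
print("Decoded intention:", select_option(scores, option_ids, 4))  # decodes option 2
```

Averaging evidence over repeated presentations is how ERP spellers typically trade speed for reliability; fewer repetitions speed up the dialogue at the cost of accuracy.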