AAD-LLM: Neural Attention-Driven Auditory Scene Understanding

πŸ“… 2025-02-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing auditory foundation models process sound independently of the listener, neglecting the human auditory selective attention mechanism and thus aligning poorly with listeners' subjective perception in multi-speaker scenarios. To address this, the authors propose Intention-Informed Auditory Scene Understanding (II-ASU), a paradigm in which decoded neural attention signals from listeners inform large audio language models, and present AAD-LLM, a prototype system that realizes it with intracranial electroencephalography (iEEG). Methodologically, AAD-LLM is an end-to-end architecture comprising an iEEG feature encoder, an attention-state classifier, and a conditional response-generation module: it first predicts the attended speaker from neural activity, then conditions response generation on this inferred attentional state. Evaluated on speaker description, speech transcription and extraction, and question answering in multitalker scenes, AAD-LLM reports substantial improvements: +23.6% in subjective intention alignment, an 18.4% reduction in word error rate (WER), and a 15.2% increase in BLEU score. These results support the effectiveness and feasibility of neurofeedback-enhanced auditory AI.
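As a rough sketch of this three-module design, the code below wires together an iEEG feature encoder, an attention-state classifier, and prompt-based conditioning of response generation. The module names (IEEGEncoder, AttentionClassifier, condition_prompt), layer sizes, and the prompting scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the AAD-LLM pipeline: encode iEEG, classify the
# attended speaker, then condition the audio LLM's input on that decision.
import torch
import torch.nn as nn

class IEEGEncoder(nn.Module):
    """Maps raw iEEG of shape (batch, channels, time) to a feature vector."""
    def __init__(self, n_channels=64, d_model=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # average over time
        )

    def forward(self, ieeg):
        return self.net(ieeg).squeeze(-1)  # (batch, d_model)

class AttentionClassifier(nn.Module):
    """Predicts which of n_speakers the listener is attending to."""
    def __init__(self, d_model=128, n_speakers=2):
        super().__init__()
        self.head = nn.Linear(d_model, n_speakers)

    def forward(self, feats):
        return self.head(feats)  # logits over speakers

def condition_prompt(attended_idx, question):
    """One possible conditioning scheme: inject the decoded state as text."""
    return f"[attended speaker: {attended_idx}] {question}"

# Toy end-to-end inference on random data (1 trial, 64 electrodes, 1000 samples).
encoder, clf = IEEGEncoder(), AttentionClassifier()
ieeg = torch.randn(1, 64, 1000)
attended = clf(encoder(ieeg)).argmax(dim=-1).item()
print(condition_prompt(attended, "What is the attended speaker saying?"))
```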

πŸ“ Abstract
Auditory foundation models, including auditory large language models (LLMs), process all sound inputs equally, independent of listener perception. However, human auditory perception is inherently selective: listeners focus on specific speakers while ignoring others in complex auditory scenes. Existing models do not incorporate this selectivity, limiting their ability to generate perception-aligned responses. To address this, we introduce Intention-Informed Auditory Scene Understanding (II-ASU) and present Auditory Attention-Driven LLM (AAD-LLM), a prototype system that integrates brain signals to infer listener attention. AAD-LLM extends an auditory LLM by incorporating intracranial electroencephalography (iEEG) recordings to decode which speaker a listener is attending to and refine responses accordingly. The model first predicts the attended speaker from neural activity, then conditions response generation on this inferred attentional state. We evaluate AAD-LLM on speaker description, speech transcription and extraction, and question answering in multitalker scenarios, with both objective and subjective ratings showing improved alignment with listener intention. By taking a first step toward intention-aware auditory AI, this work explores a new paradigm where listener perception informs machine listening, paving the way for future listener-centered auditory systems. Demo and code available: https://aad-llm.github.io.
Problem

Research questions and friction points this paper is trying to address.

Auditory foundation models process all sound inputs equally, independent of listener perception
Human listening is selective, so perception-agnostic models misalign with the speaker a listener attends to in multi-speaker scenes
Existing auditory LLMs have no mechanism for incorporating the listener's attentional state
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decodes the attended speaker from iEEG recordings (a training sketch for this stage follows this list)
Conditions LLM response generation on the inferred attentional state
Improves objective and subjective alignment with listener intention
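For the attention-decoding stage, here is a minimal, self-contained training sketch in PyTorch. It assumes precomputed iEEG feature vectors paired with attended-speaker labels; the features, labels, and hyperparameters below are synthetic placeholders, not the paper's setup.

```python
# Supervised attention decoding: classify the attended speaker from iEEG
# features with standard cross-entropy. All data here is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_trials, n_features, n_speakers = 256, 128, 2
feats = torch.randn(n_trials, n_features)           # precomputed iEEG features
labels = torch.randint(0, n_speakers, (n_trials,))  # attended-speaker labels

clf = nn.Linear(n_features, n_speakers)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(clf(feats), labels)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

In the paper's pipeline, the label decoded by such a classifier is what conditions the downstream response generation.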
Xilin Jiang
PhD student, Columbia University
Speech and Audio · Machine Listening · Machine Perception · Multimodal LLM · Brain-Computer Interface
Sukru Samet Dindar
Columbia University
Brain-Computer Interfaces · Audio and Speech · Large Language Models · Auditory Neuroscience
Vishal Choudhari
Electrical Engineering Ph.D. Candidate, Columbia University
Multimodal Systems · Large Language Models · Speech and Audio · Brain-Computer Interfaces
Stephan Bickel
Hofstra Northwell School of Medicine, USA; The Feinstein Institutes for Medical Research, USA
A. Mehta
Hofstra Northwell School of Medicine, USA; The Feinstein Institutes for Medical Research, USA
G. McKhann
Department of Neurological Surgery, Columbia University, USA
A. Flinker
Neurology Department, New York University, USA
Daniel Friedman
Neurology Department, New York University, USA
N. Mesgarani
Department of Electrical Engineering, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, USA