🤖 AI Summary
To address the challenges of slow training, high distortion, excessive computational cost, and low reconstruction accuracy in EMG noise removal from EEG signals, this paper proposes TADA, a lightweight denoising model specifically designed for EEG time-series data. Methodologically, TADA introduces two key innovations: (1) a novel covariance-targeted adversarial training framework that explicitly regularizes the second-order statistics of noise residuals via a logistic covariance-targeted loss and covariance rescaling; and (2) a correlation-driven lightweight convolutional autoencoder architecture with fewer than 400K parameters. Evaluated on the EEGdenoiseNet benchmark, TADA outperforms conventional filters and performs competitively against state-of-the-art deep learning models on metrics such as the correlation coefficient, temporal RRMSE, and spectral RRMSE, while ensuring high-fidelity signal reconstruction, minimal distortion, and efficient inference.
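To make the covariance-targeting idea concrete, the sketch below penalizes the second-order statistics of the noise residual (clean minus denoised) by driving its channel covariance toward zero. The function name, the Frobenius-norm formulation, and the array layout are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def covariance_penalty(residual):
    """Squared-Frobenius-norm penalty on the channel covariance of
    the noise residual. Hypothetical sketch of a covariance-targeted
    regularizer; the paper's actual loss may differ in form.

    residual: (channels, samples) array of (clean - denoised) values.
    """
    # Center each channel, then form the empirical channel covariance.
    centered = residual - residual.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / residual.shape[1]
    # Penalize all second-order structure left in the residual.
    return float(np.sum(cov ** 2))
```

In an adversarial training loop, a term like this would be added to the reconstruction loss so the generator is pushed to leave residuals that are statistically unstructured, not merely small.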
📄 Abstract
Current machine learning (ML)-based algorithms for filtering electroencephalography (EEG) time-series data face challenges including long training times, weak regularization, and inaccurate reconstruction. To address these shortcomings, we present an ML filtration algorithm driven by a logistic covariance-targeted adversarial denoising autoencoder (TADA). We hypothesize that the expressivity of a targeted, correlation-driven convolutional autoencoder will enable effective time-series filtration while minimizing compute requirements (e.g., runtime, model size). Furthermore, we expect that adversarial training with covariance rescaling will minimize signal degradation. To test this hypothesis, a TADA system prototype was trained and evaluated on the task of removing electromyographic (EMG) noise from EEG data in the EEGdenoiseNet dataset, which includes EMG and EEG data from 67 subjects. The TADA filter surpasses conventional signal filtration algorithms across quantitative metrics (correlation coefficient, temporal RRMSE, spectral RRMSE), and performs competitively against other deep learning architectures at a reduced model size of fewer than 400,000 trainable parameters. Further experimentation will be necessary to assess the viability of TADA on a wider range of deployment cases.
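The three evaluation metrics named above have widely used definitions in the EEG denoising literature. The sketch below shows one plausible implementation; the function names and the simple periodogram-based PSD estimate are our assumptions, not the paper's code.

```python
import numpy as np

def rrmse_temporal(clean, denoised):
    """Relative RMSE in the time domain: RMS of the error divided
    by RMS of the clean reference signal."""
    err = np.sqrt(np.mean((denoised - clean) ** 2))
    return err / np.sqrt(np.mean(clean ** 2))

def rrmse_spectral(clean, denoised):
    """Relative RMSE between power spectral densities, here estimated
    with a simple periodogram (assumption; Welch's method is common)."""
    psd = lambda x: np.abs(np.fft.rfft(x)) ** 2 / len(x)
    p_clean, p_den = psd(clean), psd(denoised)
    err = np.sqrt(np.mean((p_den - p_clean) ** 2))
    return err / np.sqrt(np.mean(p_clean ** 2))

def correlation_coefficient(clean, denoised):
    """Pearson correlation between the denoised output and the
    clean reference."""
    return np.corrcoef(clean, denoised)[0, 1]
```

All three metrics compare the filter's output against the ground-truth clean EEG segment; lower RRMSE and higher correlation indicate better reconstruction.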