Detecting Music Performance Errors with Transformers

📅 2025-01-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio-based error detection tools for beginner music training rely heavily on explicit audio–score alignment and, because annotated data is scarce, fall back on heuristics that misidentify pitch and rhythm errors. To address this, the paper proposes an end-to-end, audio-driven performance error detection framework. Its key contributions are: (1) Polytune, a transformer model trained end-to-end to map performance audio directly to annotated music scores, aligning audio and score implicitly in latent space; (2) a controllable synthesis method for generating musically plausible erroneous performance data, mitigating the bottleneck of real-world annotations; and (3) a unified architecture that supports multiple instruments. Evaluated across 14 instruments, the framework achieves a mean Error Detection F1 score of 64.1%, outperforming the prior state of the art by 40 percentage points. The code and datasets are publicly released.

📝 Abstract
Beginner musicians often struggle to identify specific errors in their performances, such as playing incorrect notes or rhythms. Existing tools for music error detection have two limitations: (1) they rely on automatic alignment and are therefore prone to errors caused by small deviations between the aligned performance and score; (2) there is too little annotated data to train music error detection models, leading to over-reliance on heuristics. To address (1), we propose a novel transformer model, Polytune, that takes audio inputs and outputs annotated music scores. The model is trained end-to-end to implicitly align and compare performance audio with music scores through latent-space representations. To address (2), we present a novel data generation technique capable of creating large-scale synthetic music error datasets. Our approach achieves a 64.1% average Error Detection F1 score, improving upon prior work by 40 percentage points across 14 instruments. Additionally, unlike existing transcription methods repurposed for music error detection, our model can handle multiple instruments. Our source code and datasets are available at https://github.com/ben2002chou/Polytune.
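The synthetic-error idea from the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: notes are plain `(pitch, onset, duration)` tuples, and `inject_errors` (an assumed helper, with illustrative probabilities) perturbs pitch or timing and records a per-note label that an error detector could be trained against.

```python
import random

# A note is (pitch: MIDI number, onset: beats, duration: beats).
# Error categories mirror the pitch/rhythm errors described in the abstract;
# probabilities and perturbation magnitudes are illustrative assumptions.

def inject_errors(score, p_pitch=0.1, p_rhythm=0.1, p_extra=0.05,
                  p_missing=0.05, seed=0):
    """Return (performance, labels): a plausibly erroneous rendition of
    `score` plus one error label per original score note."""
    rng = random.Random(seed)
    performance, labels = [], []
    for pitch, onset, dur in score:
        r = rng.random()
        if r < p_missing:                          # note dropped entirely
            labels.append("missing")
            continue
        elif r < p_missing + p_pitch:              # wrong pitch: +/- 1-2 semitones
            pitch += rng.choice([-2, -1, 1, 2])
            labels.append("wrong_pitch")
        elif r < p_missing + p_pitch + p_rhythm:   # rhythm error: shift the onset
            onset += rng.choice([-0.25, 0.25])
            labels.append("wrong_rhythm")
        else:
            labels.append("correct")
        performance.append((pitch, onset, dur))
        if rng.random() < p_extra:                 # spurious extra note
            performance.append((pitch + rng.choice([-1, 1]), onset + 0.1, dur))
    return performance, labels

score = [(60, 0.0, 1.0), (62, 1.0, 1.0), (64, 2.0, 1.0), (65, 3.0, 1.0)]
perf, labels = inject_errors(score, seed=42)
print(labels)  # one label per original score note
```

Because the generator knows exactly which notes it corrupted, the labels come for free, which is the point of synthesizing errors rather than annotating real recordings.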
Problem

Research questions and friction points this paper is trying to address.

Music Error Detection
Alignment Accuracy
Data Limitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polytune
Transformer-based Error Detection
Synthetic Error Data Generation
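To make the headline metric concrete, here is a simplified stand-in for an Error Detection F1 score: macro-averaged F1 over per-note error labels. This is a hedged sketch; the paper's actual evaluation may match notes by onset/pitch tolerance rather than by position, and its label set may differ.

```python
def error_detection_f1(true_labels, pred_labels):
    """Macro-averaged F1 over per-note error labels (position-aligned).
    Simplified illustration, not the paper's exact metric."""
    classes = sorted(set(true_labels) | set(pred_labels))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(true_labels, pred_labels))
        fp = sum(t != c and p == c for t, p in zip(true_labels, pred_labels))
        fn = sum(t == c and p != c for t, p in zip(true_labels, pred_labels))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

true = ["correct", "wrong_pitch", "correct", "missing"]
pred = ["correct", "wrong_pitch", "wrong_pitch", "missing"]
print(round(error_detection_f1(true, pred), 3))  # → 0.778
```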
Benjamin Shiue-Hal Chou
PhD student, Purdue University
Music and Artificial Intelligence, Computer Vision
Purvish Jajal
Purdue University
Deep Learning
Nick Eliopoulos
Purdue University
Computer Vision, Machine Learning, Edge Devices
Tim Nadolsky
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, 47907
Cheng-Yun Yang
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, 47907
James C. Davis
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, 47907
Kristen Yeon-Ji Yun
Purdue University
AI in Music
Yung-Hsiang Lu
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA, 47907