Who Gets Heard? Rethinking Fairness in AI for Music Systems

📅 2025-11-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies pervasive cultural and genre biases in music AI systems, particularly misrepresenting and distorting marginalized musical traditions—such as Indian rāga—from the Global South, thereby eroding creator trust, constraining creative expression, and exacerbating cultural erasure. To address this, we propose a three-tier fairness-enhancement framework spanning data curation, model design, and human–AI interaction, integrating critical AI analysis, cross-cultural musicology, participatory data governance, and inclusive interface design. Our key contribution is the first systematic deconstruction of bias propagation pathways across the AI development lifecycle, coupled with deep contextual embedding of cultural knowledge at every stage. Empirical evaluation demonstrates significant improvements in both musical representation accuracy and cultural sensitivity. The framework provides a transferable, methodology-driven foundation for developing transparent, trustworthy, and pluralistic music AI systems.

📝 Abstract
In recent years, the music research community has examined the risks of AI models for music, with generative AI models in particular raising concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders including creators, distributors, and listeners, and shape representation in AI for music. These biases can misrepresent marginalized traditions, especially from the Global South, producing inauthentic outputs (e.g., distorted rāgas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels of music-AI systems.
Problem

Research questions and friction points this paper is trying to address.

Addressing cultural and genre biases in AI music systems
Preventing misrepresentation of marginalized musical traditions globally
Reducing biased outputs that constrain creativity and contribute to cultural erasure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Addressing biases at the dataset level in music-AI systems
Implementing model-level changes to reduce cultural misrepresentation
Redesigning interfaces to enhance fairness for marginalized traditions