DiffVox: A Differentiable Model for Capturing and Analysing Professional Effects Distributions

📅 2025-04-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenges of modelling effect parameter distributions and ensuring interpretability in professional vocal mixing. To this end, the authors propose DiffVox, the first differentiable and interpretable vocal effect modelling framework. Methodologically, DiffVox combines differentiable signal-processing modules into a parametric effect chain and applies PCA, relating the results to McAdams' perceptual timbre dimensions (e.g. brightness, spaciousness). This enables a systematic characterisation of the non-Gaussian distributions and strong inter-parameter couplings, such as coordinated high- and low-frequency shaping, in vocal EQ, compression, delay, and reverb parameters. Parameters are estimated on 435 professionally mixed vocal tracks from MedleyDB and a private dataset, and the analysis reveals statistically significant correlations (p < 0.01) between learned parameter dependencies and perceptual timbral attributes. The implementation and accompanying dataset are released as open source to advance research in automated mixing.

📝 Abstract
This study introduces a novel and interpretable model, DiffVox, for matching vocal effects in music production. DiffVox, short for "Differentiable Vocal Fx", integrates parametric equalisation, dynamic range control, delay, and reverb with efficient differentiable implementations to enable gradient-based optimisation for parameter estimation. Vocal presets are retrieved from two datasets, comprising 70 tracks from MedleyDB and 365 tracks from a private collection. Analysis of parameter correlations highlights strong relationships between effects and parameters, such as the high-pass and low-shelf filters often acting together to shape the low end, and the delay time correlating with the intensity of the delayed signals. Principal component analysis reveals connections to McAdams' timbre dimensions, where the most crucial component modulates perceived spaciousness while the secondary components influence spectral brightness. Statistical testing confirms the non-Gaussian nature of the parameter distribution, highlighting the complexity of the vocal effects space. These initial findings on the parameter distributions lay the foundation for future research in vocal effects modelling and automatic mixing. Our source code and datasets are accessible at https://github.com/SonyResearch/diffvox.
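The gradient-based parameter estimation described in the abstract can be illustrated with a toy example. The sketch below is not the DiffVox implementation: it fits a single gain parameter of a trivial stand-in "effect" to a reference signal by gradient descent, using the analytic gradient of an MSE matching loss. All names and values are illustrative; the real system optimises a full differentiable chain of EQ, compression, delay, and reverb parameters with automatic differentiation.

```python
import numpy as np

def effect(x, gain):
    """Toy 'effect': a single multiplicative gain (stand-in for a full chain)."""
    return gain * x

def fit_gain(x, target, steps=200, lr=0.1):
    """Estimate the gain by gradient descent on the MSE between output and target."""
    gain = 1.0  # initial parameter guess
    for _ in range(steps):
        y = effect(x, gain)
        # d/d(gain) of mean((y - target)^2) = 2 * mean((y - target) * x)
        grad = 2.0 * np.mean((y - target) * x)
        gain -= lr * grad
    return gain

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)       # dry vocal stand-in
target = effect(x, 0.5)             # reference mixed with gain 0.5
print(round(fit_gain(x, target), 3))  # prints 0.5
```

With differentiable implementations of each processor, the same loop scales to the full preset: every effect parameter receives a gradient from the match loss between the processed dry vocal and the professionally mixed reference.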
Problem

Research questions and friction points this paper is trying to address.

How to match professional vocal effects in music production with an interpretable model
Which correlations structure the distributions of vocal effect parameters
Whether the vocal effects parameter space deviates from Gaussianity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable model for vocal effects optimization
Integrates EQ, dynamics, delay, reverb efficiently
PCA links parameters to timbre dimensions
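The PCA link between parameters and timbre dimensions can be sketched as follows: stack the per-track effect-parameter vectors into a matrix and extract principal components, whose leading axes the paper relates to perceived spaciousness and brightness. The parameter layout and data below are purely synthetic, made up for illustration only.

```python
import numpy as np

def pca(params):
    """PCA via SVD of the mean-centred parameter matrix (tracks x parameters)."""
    centred = params - params.mean(axis=0)
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    explained = s**2 / np.sum(s**2)   # fraction of variance per component
    scores = centred @ vt.T           # per-track coordinates on each component
    return vt, explained, scores

# Hypothetical presets: 435 tracks x 8 effect parameters (e.g. EQ gains,
# compressor threshold, delay time, reverb decay) -- synthetic stand-ins,
# with larger variance on the first two columns so they dominate the PCA.
rng = np.random.default_rng(1)
presets = rng.standard_normal((435, 8)) * np.array([3, 3, 1, 1, 1, 1, 0.5, 0.5])

components, explained, scores = pca(presets)
print(explained[0] >= explained[-1])  # components sorted by explained variance
```

Interpreting a component then amounts to reading off which parameters carry large loadings in its row of `components`, e.g. a first component dominated by delay and reverb parameters would modulate spaciousness.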
Chin-Yun Yu
Queen Mary University of London
DSP · Music Information Retrieval · Machine Learning

Marco A. Martínez-Ramírez
Sony AI, Tokyo, Japan

Junghyun Koo
Sony AI / Sony Research
Intelligent Music Production · Controllable Generative Models · Source Separation

Ben Hayes
Centre for Digital Music, Queen Mary University of London, London, UK

Wei-Hsiang Liao
Sony AI, Tokyo, Japan

György Fazekas
Centre for Digital Music, Queen Mary University of London, London, UK

Yuki Mitsufuji
Distinguished Engineer, Sony
Machine Learning · Audio · Source Separation · Music Technology · Spatial Audio