Go witheFlow: Real-time Emotion Driven Audio Effects Modulation

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of emotion-driven, real-time modulation of audio effects during live musical performance. We propose an affective computing framework that jointly analyzes real-time physiological signals—including heart rate variability and galvanic skin response—with multidimensional audio features. The framework implements a lightweight, on-device, open-source human-computer co-creative music system. To our knowledge, this is the first work to achieve millisecond-level (end-to-end latency < 80 ms) joint affective decoding from biosignals and audio, with direct parameter mapping to digital audio workstation (DAW) plug-ins. Implemented in Python/C++, the system runs robustly on commodity laptops and supports seamless integration with mainstream open-source biosensors and DAWs. Empirical evaluation in authentic performance settings demonstrates significant improvements in expressive consistency and naturalness of human–computer collaboration. The system establishes a reproducible technical paradigm bridging affective computing and interactive music creation.

Technology Category

Application Category

📝 Abstract
Music performance is a distinctly human activity, intrinsically linked to the performer's ability to convey, evoke, or express emotion. Machines cannot perform music in the human sense; they can produce, reproduce, execute, or synthesize music, but they lack the capacity for affective or emotional experience. As such, music performance is an ideal candidate through which to explore aspects of collaboration between humans and machines. In this paper, we introduce the witheFlow system, designed to enhance real-time music performance by automatically modulating audio effects based on features extracted from both biosignals and the audio itself. The system, currently in a proof-of-concept phase, is designed to be lightweight, able to run locally on a laptop, and is open-source given the availability of a compatible Digital Audio Workstation and sensors.
Problem

Research questions and friction points this paper is trying to address.

Real-time emotion-driven audio effects modulation for music performance
Enhancing human-machine collaboration through biosignal and audio analysis
Developing lightweight open-source system for automatic performance enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modulates audio effects using biosignal and audio features
Runs locally on laptops as lightweight open-source system
Enables real-time emotion-driven human-machine music collaboration
🔎 Similar Papers
No similar papers found.