🤖 AI Summary
Generative AI music synthesis faces challenges including high energy consumption, copyright infringement risks, and constrained creative expressivity. To address these, this work proposes a training-free, data-agnostic stochastic recurrent neural network (RNN) architecture—employing fully randomly initialized LSTM or GRU units—integrated with temporal parameterization control and real-time audio synthesis interfaces. The method enables controllable musical signal generation with low latency (<10 ms) and milliwatt-level power consumption, bypassing end-to-end learning paradigms entirely. It supports interactive, improvisational generation of configurable musical elements—including arpeggios and low-frequency oscillators (LFOs)—while drastically reducing computational and energy overhead and eliminating dataset-dependent copyright liabilities. The implementation is lightweight and has been integrated into an open-source music production workflow platform. This establishes a human-AI co-creation paradigm that requires zero model training and imposes no copyright burden on musicians.
📝 Abstract
Generative artificial intelligence raises concerns related to energy consumption, copyright infringement and creative atrophy. We show that randomly initialized recurrent neural networks can produce arpeggios and low-frequency oscillations that are rich and configurable. In contrast to end-to-end music generation that aims to replace musicians, our approach expands their creativity while requiring no data and much less computational power. More information can be found at: https://allendia.com/
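The core idea — a recurrent network whose weights are drawn at random and never trained, iterated to produce a bounded, configurable control signal — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the hidden size, weight scale, and all function names are assumptions. The GRU's convex update keeps the state in [-1, 1], so a single hidden unit can be read out directly as an LFO- or arpeggio-style control signal.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16  # hidden size (illustrative choice)

def rand(shape, scale=1.5):
    # The weight scale is the main "knob": larger values push the
    # untrained recurrence toward richer, more oscillatory dynamics.
    return rng.uniform(-scale, scale, shape)

# Randomly initialized GRU parameters -- never trained on any data.
Wz, Uz, bz = rand((H, H)), rand((H, H)), rand(H)  # update gate
Wr, Ur, br = rand((H, H)), rand((H, H)), rand(H)  # reset gate
Wh, Uh, bh = rand((H, H)), rand((H, H)), rand(H)  # candidate state

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x):
    z = sigmoid(Wz @ x + Uz @ h + bz)
    r = sigmoid(Wr @ x + Ur @ h + br)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    # Convex combination of old state and bounded candidate:
    # the state stays in [-1, 1] by construction.
    return (1 - z) * h + z * h_tilde

def generate(n_steps):
    # Free-running mode: feed the hidden state back as the input.
    h = np.zeros(H)
    out = []
    for _ in range(n_steps):
        h = gru_step(h, h)
        out.append(h[0])  # read one unit as the control signal
    return np.array(out)

signal = generate(512)  # bounded signal, usable as an LFO source
```

Because no gradients are ever computed, each step is a handful of small matrix-vector products, which is consistent with the paper's claims of sub-10 ms latency and milliwatt-scale power on commodity hardware.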