Fusing Memory and Attention: A study on LSTM, Transformer and Hybrid Architectures for Symbolic Music Generation

📅 2026-03-22
🤖 AI Summary
This study addresses the challenge that existing symbolic music generation models struggle to simultaneously maintain local melodic continuity and global structural coherence. The authors systematically compare the modeling capabilities of LSTM and Transformer architectures for this task, introducing an evaluation based on 17 fine-grained musical quality metrics. They propose a hybrid architecture that leverages a Transformer encoder to capture global structure while employing an LSTM decoder to preserve local fluency. Evaluating on the Deutschl dataset, each model generates 1,000 melodies, and ablation studies and human listening tests demonstrate that the hybrid approach significantly outperforms the individual baseline models in both local continuity and global consistency.

📝 Abstract
Machine learning techniques, such as Transformers and Long Short-Term Memory (LSTM) networks, play a crucial role in Symbolic Music Generation (SMG). Existing literature indicates a difference between LSTMs and Transformers regarding their ability to model local melodic continuity versus maintaining global structural coherence. However, their specific properties within the context of SMG have not been systematically studied. This paper addresses this gap by providing a fine-grained comparative analysis of LSTMs versus Transformers for SMG, examining local and global properties in detail using 17 musical quality metrics on the Deutschl dataset. We find that LSTM networks excel at capturing local patterns but fail to preserve long-range dependencies, while Transformers model global structure effectively but tend to produce irregular phrasing. Based on this analysis, and leveraging the respective strengths of the two architectures, we propose a hybrid architecture combining a Transformer encoder with an LSTM decoder and evaluate it against both baselines. We evaluate 1,000 melodies generated by each of the three architectures on the Deutschl dataset. The results show that the hybrid method achieves better local and global continuity and coherence than the baselines. Our work highlights the key characteristics of these models and demonstrates how their properties can be leveraged to design superior models. We further support the experiments with ablation studies and human perceptual evaluations, which statistically validate the findings.
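The abstract describes the hybrid design only at a high level: a Transformer encoder supplies global, attention-based context, and an LSTM decoder smooths the sequence step by step. The sketch below is a minimal PyTorch illustration of that encoder/decoder pairing, assuming standard token-level melody modeling; the class name, hyperparameters, and layer sizes are illustrative guesses, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HybridMusicModel(nn.Module):
    """Illustrative Transformer-encoder + LSTM-decoder hybrid (not the paper's code).

    The Transformer encoder attends over the whole sequence to capture global
    structure; the LSTM decoder then processes the contextualized states
    sequentially, favoring local melodic continuity.
    """

    def __init__(self, vocab_size=128, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        self.decoder = nn.LSTM(d_model, d_model, num_layers=1, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)        # (batch, seq, d_model)
        ctx = self.encoder(x)         # global context via self-attention
        out, _ = self.decoder(ctx)    # sequential refinement via the LSTM
        return self.head(out)         # per-step logits over the note vocabulary
```

Training would proceed as ordinary next-token prediction over a symbolic encoding of the melodies; generation then samples from the per-step logits autoregressively.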
Problem

Research questions and friction points this paper is trying to address.

Symbolic Music Generation
LSTM
Transformer
local continuity
global coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Architecture
Symbolic Music Generation
LSTM
Transformer
Local-Global Coherence
Soudeep Ghoshal
Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, India
Sandipan Chakraborty
Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, India
Pradipto Chowdhury
Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, India
Himanshu Buckchash
University of Applied Sciences Krems, Austria
Deep learning · computer vision · healthcare · sustainability