Publication: Music Generation Using RNN-LSTM with Self-Attention Mechanism
Abstract
Music generation using artificial intelligence is a rapidly evolving domain that bridges creativity and computational intelligence, with promising applications in entertainment, education, and therapy. This paper employs a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) layers for music generation, using the Pretty MIDI library to extract note features from the MIDI files in the dataset. These notes were fed into a model composed of three LSTM layers, with dropout layers incorporated to prevent overfitting. The model was trained on a diverse set of MIDI files, allowing it to capture a variety of musical styles and patterns, and it generated coherent, stylistically consistent pieces. Experimental results show that the LSTM + Self-Attention model outperformed baseline RNN, LSTM, and BiLSTM models, achieving the lowest validation loss (0.47) and confirming its effectiveness for the complex task of music generation.
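The abstract does not detail the attention layer itself, so the following is only an illustrative sketch of the scaled dot-product self-attention commonly placed on top of LSTM outputs in such models. It uses identity query/key/value projections and plain Python lists for clarity; the dimensions, projections, and the `self_attention` function are assumptions, not the authors' implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(seq):
    """Scaled dot-product self-attention over a sequence of feature
    vectors (e.g. per-timestep LSTM outputs). Identity Q/K/V
    projections are assumed for simplicity; returns one context
    vector per input position."""
    d = len(seq[0])
    scale = math.sqrt(d)
    out = []
    for q in seq:
        # Score this position against every position, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in seq]
        weights = softmax(scores)
        # Context vector: attention-weighted sum of the value vectors.
        ctx = [sum(w * v[j] for w, v in zip(weights, seq)) for j in range(d)]
        out.append(ctx)
    return out

# Tiny example: 3 timesteps of 2-dimensional note features.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = self_attention(seq)
```

In a trained model the attention weights would let the decoder emphasize earlier notes that are most relevant to the next prediction, rather than relying solely on the LSTM's final hidden state.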
Citation
M. Abdelalim, M. Bashar, H. Nemer and W. Elmasry, "Music Generation Using RNN-LSTM with Self-Attention Mechanism," 2025 9th International Symposium on Innovative Approaches in Smart Technologies (ISAS), Gaziantep, Turkiye, 2025, pp. 1-8.
