AUTOMATIC MUSIC GENERATION

Date
2023-05
Authors
KHANAL, PRAVESH
BHANDARI, SANJEEV
LAMICHHANE, SRIJANA
TAMANG, SUDIP
Journal Title
Journal ISSN
Volume Title
Publisher
I.O.E. Pulchowk Campus
Abstract
‘Automatic Music Generation’ composes short pieces of music from parameters such as notes, pitch intervals, and chords. This project applies a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units to generate music. Composing music traditionally requires considerable trial and error; with automatic music generation, an AI model can predict a suitable continuation instead of testing ideas in a studio, saving time. The main focus of this project is to use an LSTM network, together with an algorithmic approach, to generate music while keeping two separately generated outputs synchronized. The dataset was sourced from the ESAC Folk database[1]. It was originally in the .kern file format and was converted to MIDI for use in this project. The MIDI files served as the music data source: they were encoded into a time-series notation and used to train the model, exploiting the temporal dependencies in the music sequence. The trained model generates new time-series notation, which is then decoded to obtain a music file.
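As a hedged illustration (not the project's actual code), the time-series encoding and decoding described above can be sketched as follows. This sketch assumes a fixed 16th-note time grid, with a MIDI pitch number marking a note onset, "_" marking that the previous symbol is held for another step, and "r" marking a rest; the grid size and symbol names are assumptions, not details from the project.

```python
# Sketch of a time-series music encoding, assuming a 16th-note
# time grid (step = 0.25 quarter notes). Pitches are MIDI numbers;
# "_" extends the previous symbol, "r" marks a rest.

STEP = 0.25  # grid resolution in quarter-note lengths (assumed)

def encode(notes):
    """notes: list of (midi_pitch_or_None, duration_in_quarters).
    Returns a space-separated time-series string."""
    symbols = []
    for pitch, duration in notes:
        steps = int(duration / STEP)
        onset = "r" if pitch is None else str(pitch)
        symbols.append(onset)
        symbols.extend("_" * (steps - 1))  # hold marks for the remaining steps
    return " ".join(symbols)

def decode(encoded):
    """Inverse of encode: rebuild (pitch, duration) pairs."""
    notes = []
    for sym in encoded.split():
        if sym == "_":
            pitch, dur = notes[-1]
            notes[-1] = (pitch, dur + STEP)  # lengthen the previous event
        else:
            pitch = None if sym == "r" else int(sym)
            notes.append((pitch, STEP))
    return notes

# Example: C4 for a quarter note, an eighth-note rest, D4 for an eighth note
melody = [(60, 1.0), (None, 0.5), (62, 0.5)]
ts = encode(melody)       # → "60 _ _ _ r _ 62 _"
```

A model trained on such strings predicts the next symbol from a window of previous symbols; decoding the generated string recovers playable notes and durations.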
Description
‘Automatic Music Generation’ composes short pieces of music from parameters such as notes, pitch intervals, and chords. This project applies a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units to generate music.
Keywords
Tone-matrix, Generation, model
Citation