Please use this identifier to cite or link to this item: https://elibrary.tucl.edu.np/handle/123456789/18844
Full metadata record
DC Field | Value | Language
dc.contributor.author | KHANAL, PRADEEP KUMAR | -
dc.contributor.author | KAYASTHA, ACHYUT | -
dc.contributor.author | KHATAKHO, ASHISH | -
dc.date.accessioned | 2023-07-31T06:12:32Z | -
dc.date.available | 2023-07-31T06:12:32Z | -
dc.date.issued | 2022-04 | -
dc.identifier.uri | https://elibrary.tucl.edu.np/handle/123456789/18844 | -
dc.description | In our daily lives, we often listen to songs that we like and enjoy. However, there may be instances when we are in transit or at venues such as clubs or restaurants, and we hear a song playing in the background that catches our attention. We may desire to listen to this song again at a later time, but unfortunately, we may not be aware of the title of the song. | en_US
dc.description.abstract | In our daily lives, we often listen to songs that we like and enjoy. However, there may be instances when we are in transit or at venues such as clubs or restaurants, and we hear a song playing in the background that catches our attention. We may desire to listen to this song again at a later time, but unfortunately, we may not know its title, so we are unable to locate and listen to it again. Our project, titled “Music Recognition Using Deep Learning,” aims to provide a convenient way to identify songs heard in such settings. While several popular applications, such as Shazam, SoundHound, and Google Sound Search, already offer music recognition services, we conducted an in-depth study of papers related to these apps to identify appropriate technologies and algorithms for our project. Our project employs a deep neural network trained with a contrastive learning approach for song recognition. First, a large collection of songs is gathered and subjected to signal processing techniques, including the Short-Time Fourier Transform (STFT), a mel filter bank, and conversion to the decibel scale, to generate log mel-spectrograms. These log mel-spectrograms are then fed into the neural network, which is trained to generate a fingerprint for each song at the segment level. These fingerprints are stored in a database. | en_US
dc.language.iso | en | en_US
dc.publisher | I.O.E. Pulchowk Campus | en_US
dc.subject | music recognition | en_US
dc.subject | deep learning in music | en_US
dc.subject | contrastive learning | en_US
dc.title | MUSIC RECOGNITION USING DEEP LEARNING | en_US
dc.type | Report | en_US
local.institute.title | Institute of Engineering | en_US
local.academic.level | Bachelor | en_US
local.affiliatedinstitute.title | Pulchowk Campus | en_US
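
The preprocessing pipeline named in the abstract above (Short-Time Fourier Transform, mel filter bank, decibel scaling into log mel-spectrograms) can be sketched as follows. This is a minimal illustration using librosa, a library the record does not mention; the sample rate, FFT size, hop length, and number of mel bands are assumed values, not the report's actual settings.

    # Sketch of the log mel-spectrogram preprocessing described in the abstract.
    # librosa is an assumed dependency; sr, n_fft, hop_length, and n_mels are
    # illustrative guesses, not the report's parameters.
    import librosa
    import numpy as np

    def log_mel_spectrogram(path, sr=8000, n_fft=1024, hop_length=256, n_mels=64):
        # Load the audio and resample it to a fixed rate.
        y, sr = librosa.load(path, sr=sr)
        # STFT -> power spectrogram -> mel filter bank.
        mel = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
        )
        # Convert power to the decibel scale to obtain the log mel-spectrogram.
        return librosa.power_to_db(mel, ref=np.max)

Songs would then be cut into fixed-length segments, with each segment's log mel-spectrogram fed to the network, matching the abstract's segment-level fingerprinting.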
Appears in Collections: Electronics and Communication Engineering
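
The abstract states that the network is trained with a contrastive learning approach but does not name the loss. A common formulation for segment-level audio fingerprinting is an NT-Xent-style objective over pairs of a clean segment and an augmented (e.g., noisy) version of it; the sketch below shows that formulation under those assumptions, not the report's confirmed implementation.

    # Hedged sketch of an NT-Xent-style contrastive objective over segment
    # embeddings. The record does not state which loss the report used.
    import torch
    import torch.nn.functional as F

    def ntxent_loss(z_clean, z_aug, temperature=0.1):
        # z_clean, z_aug: (N, D) embeddings of segments and their augmented twins.
        z = F.normalize(torch.cat([z_clean, z_aug], dim=0), dim=1)  # (2N, D), unit norm
        sim = z @ z.t() / temperature          # pairwise cosine similarities
        sim.fill_diagonal_(float("-inf"))      # a segment must not match itself
        n = z_clean.size(0)
        # The positive for row i is its augmented twin at i + n (and vice versa).
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
        return F.cross_entropy(sim, targets)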

Files in This Item:
File | Description | Size | Format
pradeep k khnal et al. be report electronics apr 2022.pdf | - | 1.89 MB | Adobe PDF
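
Finally, the abstract says the segment-level fingerprints are stored in a database for recognition. One plausible realization, assumed here rather than taken from the report, is maximum-inner-product search over unit-norm fingerprints with a majority vote across the query's segments:

    # Illustrative fingerprint lookup; the matching strategy is an assumption,
    # since the record only says fingerprints are stored in a database.
    import numpy as np

    def identify(query_fps, db_fps, db_song_ids):
        # query_fps: (Q, D) unit-norm fingerprints of the query recording's segments.
        # db_fps: (M, D) unit-norm fingerprints of every indexed segment.
        # db_song_ids: (M,) song id owning each indexed segment.
        nearest = np.argmax(query_fps @ db_fps.T, axis=1)  # best match per segment
        votes = db_song_ids[nearest]
        ids, counts = np.unique(votes, return_counts=True)
        return ids[np.argmax(counts)]                      # majority-vote song id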


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.