Browsing by Subject "Deep Learning"
Now showing 1 - 5 of 5
Item ANOMALY DETECTION IN SURVEILLANCE VIDEOS (I.O.E. Pulchowk Campus, 2023-05) GURAGAIN, JIWAN PRASAD; SHRESTHA, KUSHAL; KUNWAR, LAXMAN; SUBEDI, YAMAN
Anomaly detection with weakly supervised video-level labels is typically formulated as a multiple instance learning (MIL) problem, in which each video is represented as a bag of snippets and the aim is to identify the snippets containing abnormal events. Although current methods show effective detection performance, their recognition of the positive instances, i.e., the rare abnormal snippets in abnormal videos, is largely biased by the dominant negative instances, especially when the abnormal events are subtle anomalies that differ only slightly from normal events. This issue is exacerbated in the many methods that ignore important temporal dependencies in video. To address it, we add information from optical flow, which captures the temporal relation between successive frames of a video. In this project, we explored the field of video anomaly detection and reviewed the existing literature on the subject, as well as related topics such as action recognition and optical flow extraction.

Item Automated Heart Arrhythmia Classification From Electrocardiographic Data Using Deep Neural Networks (Pulchowk Campus, 2021-09) Ghimire, Sumita
Arrhythmia is a medical condition in which the heart beats in an irregular pattern, and it is a common source of cardiovascular disease. Early detection and timely treatment are key to surviving it. The electrocardiogram (ECG) is a standard diagnostic tool for detecting arrhythmia, but manual interpretation of ECG recordings is error-prone and tedious. Advances in technology make it possible to deploy cost-effective automated arrhythmia detection frameworks, and many machine learning and deep learning models can effectively differentiate among various types of heartbeats.
Various deep learning models have shown that arrhythmia can be predicted effectively without feature engineering. To build an automated heartbeat classification model, several factors must be considered: data quality, the heartbeat segmentation range, class imbalance, intra- and inter-patient variation, and the identification of supraventricular ectopic (S-type) heartbeats among normal (N-type) heartbeats. This thesis addresses all of these challenges using a hybrid neural network. Features were extracted by two CNNs with different filter sizes; this dual-channel CNN captures both temporal and frequency patterns. The extracted features were combined with RR-interval information before being fed to an RNN, a BiLSTM, which classifies the ECG signals and mainly serves to separate S-type from N-type heartbeats. In particular, the BiLSTM learns hidden temporal dependencies between heartbeats by processing the input RR-interval sequence in both directions. Compared with raw individual RR intervals, this mutually connected temporal information provides stronger and more stable support for identifying S-type heartbeats. Focal loss was used to handle the class imbalance. The results show that the heartbeat classification approach presented in this thesis offers practical ideas and solutions for arrhythmia detection; the model achieved 93% accuracy.

Item DEEP LEARNING IN SPATIOTEMPORAL FETAL CARDIAC IMAGING (Pulchowk Campus, 2021-08) NIDHI, DIPAK KUMAR
Fetal echocardiography is a standard diagnostic tool used to evaluate and monitor fetuses with a compromised cardiovascular system associated with a number of fetal conditions. Deep learning is a computational technique that learns to perform specific tasks from data.
Deep learning techniques can be used to evaluate fetal cardiac ultrasound cine loops and improve the assessment of fetal abnormalities. In this study, I implemented deep learning models based on convolutional and recurrent neural networks, CNN+LSTM, CNN+GRU, and 3D CNN, for processing and classifying ultrasonographic cine loops. The CNN+LSTM, CNN+GRU, and 3D CNN models sorted fetal cardiac cine loops into 5 standard views with 92.63%, 94.99%, and 82.69% accuracy, respectively. Furthermore, they diagnosed tricuspid atresia (TA) and hypoplastic left heart syndrome (HLHS) with 94.61%, 91.99%, and 86.54% accuracy, respectively. These deep learning-based algorithms were found to be effective tools for evaluating and monitoring normal and abnormal fetal heart cine loops.

Item NEURAL AUDIO CODEC (I.O.E. Pulchowk Campus, 2023-04-30) BARAL, SUBODH; PANDEY, TAPENDRA; BURLAKOTI, ACHYUT; BARAL, SIJAL
Neural audio codecs that use end-to-end approaches have gained popularity due to their ability to learn efficient audio representations through data-driven methods, without relying on handcrafted signal-processing components. This work evaluates the performance of a neural audio codec against the traditional audio codecs Opus and EVS in terms of audio quality and efficiency. The study highlights the limitations of existing audio codecs in leveraging the abundant data available in the audio compression pipeline and proposes deep learning-based models as a potential solution. The paper reviews recent advancements in deep learning-based audio synthesis and representation learning and explores the potential of deep learning-based audio codecs to enhance compression efficiency.
The study also addresses the limitations of existing models, including slower training times and increased memory requirements. Experimental results show that our approach performs comparably to the widely used commercial codec Opus at low bitrates, with a slight drop in performance compared to current deep learning-based frameworks in exchange for significant improvements in speed and memory requirements. We have released our code and pre-trained models at https://github.com/AchyutBurlakoti/Neural-Audio-Compression for further research and improvement.

Item Self-managed cloud computing and edge computing system using deep reinforcement learning (Pulchowk Campus, 2021-09) Shakya, Sushil
Edge computing came into existence to tackle the latency problems of latency-sensitive applications running on IoT devices. Building on it, the idea of a mobile edge computing (MEC) network, in which computationally heavy tasks are executed on edge servers deployed adjacent to mobile devices, has recently gained popularity. Unlike a cloud server, an edge device has finite computation capacity and cannot handle massive computation tasks. A smart task offloading scheme is therefore needed to decide whether to execute a task on the local device itself, offload it to an edge device, or send it further to the cloud server for processing; the scheme should also utilize the available network bandwidth efficiently. The joint optimization of offloading decisions and bandwidth allocation in a multi-user, multi-task, multi-server MEC environment can be formulated as a mixed-integer nonlinear programming (MINLP) problem that preserves energy and maintains quality of service for wireless devices. MINLP is NP-hard, and the time required to solve it grows exponentially with problem size.
In this research work, deep learning and reinforcement learning are applied to solve this MINLP problem in a fraction of a second, which makes the approach suitable for real-world usage. Further, an end-to-end integrated edge and cloud computing system is proposed that switches between the two whenever required and leverages the benefits of both paradigms.
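The offloading decision described in the last abstract can be illustrated with a minimal sketch. The function names, capacity figures, and single-link bandwidth model below are hypothetical simplifications for illustration, not the thesis's actual MINLP formulation; a trained deep reinforcement learning policy would approximate this kind of latency-minimizing choice without enumerating every option.

```python
# Illustrative sketch (hypothetical, not the thesis's model): choosing
# where to run a task -- locally, on an edge server, or in the cloud --
# by minimizing an estimated completion latency.

def task_latency(task_cycles, data_bits, target, caps, bandwidth_bps):
    """Estimated completion time of a task at a given execution target.

    caps: dict mapping each target to its CPU capacity in cycles/second.
    Offloaded tasks pay a transmission delay of data_bits / bandwidth_bps
    per hop (device -> edge, and edge -> cloud for the cloud target).
    """
    hops = {"local": 0, "edge": 1, "cloud": 2}[target]
    transmit = hops * data_bits / bandwidth_bps
    compute = task_cycles / caps[target]
    return transmit + compute

def best_offload(task_cycles, data_bits, caps, bandwidth_bps):
    """Exhaustively pick the target with the lowest estimated latency."""
    return min(
        ("local", "edge", "cloud"),
        key=lambda t: task_latency(task_cycles, data_bits, t,
                                   caps, bandwidth_bps),
    )

caps = {"local": 1e8, "edge": 5e9, "cloud": 1e11}  # cycles/second
# A compute-heavy task with a small payload favours offloading:
print(best_offload(task_cycles=1e10, data_bits=1e5,
                   caps=caps, bandwidth_bps=1e7))  # cloud
# A light task with a large payload stays on the device:
print(best_offload(task_cycles=1e6, data_bits=1e8,
                   caps=caps, bandwidth_bps=1e7))  # local
```

The exhaustive search here scales exponentially once many users, tasks, and servers must be decided jointly with bandwidth shares, which is exactly why the thesis turns to a learned policy instead.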