Please use this identifier to cite or link to this item: https://elibrary.tucl.edu.np/handle/123456789/18856
Full metadata record
DC Field | Value | Language
dc.contributor.author | KOJU, BISHAD | -
dc.contributor.author | JYAKHWA, GAURAV | -
dc.contributor.author | NYOUPANE, KRITI | -
dc.contributor.author | MANANDHAR, LUNA | -
dc.date.accessioned | 2023-07-31T07:19:20Z | -
dc.date.available | 2023-07-31T07:19:20Z | -
dc.date.issued | 2023-04 | -
dc.identifier.uri | https://elibrary.tucl.edu.np/handle/123456789/18856 | -
dc.description.abstract | A number of studies and several methods have been proposed for 3D reconstruction from 2D images. The first is a triangulation approach based on identifying corresponding points in images taken from different angles to approximate a point cloud in 3D space and then reconstructing a mesh from it. This is a purely computational approach. Another approach is to reframe the 3D reconstruction problem as a recognition problem and use existing knowledge about 3D space and projection to reconstruct the scene, much as humans do; this knowledge is approximated with deep learning models. In these approaches, however, the mesh reconstruction step is extremely expensive. This cost can be reduced by reconstructing the view rather than the mesh. Neural Radiance Fields (NeRF) have been used to generate novel views. NeRF represents a scene using a fully-connected deep network whose input is a spatial location and viewing direction and whose output is the volume density and the view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and using classic volume rendering techniques to project the output colors and densities into an image. In this project, we have used the latter approach. | en_US
dc.language.iso | en | en_US
dc.publisher | I.O.E. Pulchowk Campus | en_US
dc.subject | 3D Reconstruction | en_US
dc.subject | Point Cloud | en_US
dc.subject | Deep Learning | en_US
dc.title | 3D RECONSTRUCTION BASED VIRTUAL TOUR | en_US
dc.type | Report | en_US
local.institute.title | Institute of Engineering | en_US
local.academic.level | Bachelor | en_US
local.affiliatedinstitute.title | Pulchowk Campus | en_US
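
The abstract above describes NeRF's core pipeline: a fully-connected network maps a 5D input (3D position plus viewing direction) to a volume density and an emitted color, and classic volume rendering composites the samples along each camera ray into a pixel. Below is a minimal sketch of that compositing step only, in NumPy; it is not the report's implementation, and all function and variable names are illustrative assumptions.

import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample densities and colors along one camera ray
    into a single RGB pixel, following the classic volume rendering
    quadrature that NeRF uses.

    densities: (N,) non-negative volume densities sigma_i at N samples
    colors:    (N, 3) view-dependent RGB radiance c_i at those samples
    deltas:    (N,) distances between adjacent samples along the ray
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity contributed by sample i
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i = prod_{j<i} (1 - alpha_j): transmittance, the fraction of light
    # that survives from the camera to sample i without being absorbed
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = transmittance * alphas  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Illustrative usage: 64 samples along one ray. In a real NeRF, these
# densities and colors would come from querying the fully-connected network
# at 5D coordinates (position + viewing direction) along the ray.
n = 64
rng = np.random.default_rng(0)
sigma = rng.uniform(0.0, 2.0, n)      # stand-in for predicted densities
rgb = rng.uniform(0.0, 1.0, (n, 3))   # stand-in for predicted colors
delta = np.full(n, 4.0 / n)           # even spacing over the near-far range
print(render_ray(sigma, rgb, delta))  # composited pixel color

The cumulative product implements the transmittance term, so samples hidden behind dense regions contribute little to the final color.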
Appears in Collections: Computer Engineering

Files in This Item:
File | Description | Size | Format
Bishad Koju et al. be report computer apr 2023.pdf |  | 35.25 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.