3D RECONSTRUCTION BASED VIRTUAL TOUR
Date
2023-04
Publisher
I.O.E. Pulchowk Campus
Abstract
A number of studies have been conducted, and several methods have been proposed for 3D reconstruction
from 2D images. The first is a triangulation approach based on determining the
same points in images taken from different angles to approximate a point cloud in
3D space and then reconstructing the mesh. This is purely a computation-based
approach. Another approach is to recast the 3D reconstruction problem as a recognition
problem and use existing knowledge about 3D space and projection
to reconstruct, much as humans do. This knowledge is approximated using
deep learning models. However, in these approaches, the mesh reconstruction
part is extremely expensive. This cost can be reduced by reconstructing the
view rather than the mesh. A Neural Radiance Field (NeRF)
has been used to generate novel views. NeRF represents a scene using a fully connected
deep network, whose input is a spatial location and viewing direction
and whose output is the volume density and view-dependent emitted radiance at that
spatial location. We synthesize views by querying 5D coordinates along camera
rays and use classic volume rendering techniques to project the output colors and
densities into an image. In this project, we have used the latter approach.
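The triangulation approach mentioned above can be illustrated with a minimal two-view sketch. The function below performs linear (DLT) triangulation of a single matched point; the identity-camera setup in the example is an illustrative assumption, not the project's actual calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: (3, 4) camera projection matrices
    x1, x2: (2,) coordinates of the same point observed in each image
    Returns the (3,) reconstructed point in 3D space.
    """
    # Each observation gives two linear constraints on the homogeneous
    # point X: x * (P[2] @ X) - P[0] @ X = 0 and y * (P[2] @ X) - P[1] @ X = 0
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector with smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Illustrative setup: two identity cameras, the second shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
# Projections of the point (0.5, 0.2, 2.0) into each camera
point = triangulate(P1, P2, np.array([0.25, 0.1]), np.array([-0.25, 0.1]))
print(point)  # recovers approximately [0.5, 0.2, 2.0]
```

Repeating this over many matched feature points yields the point cloud from which a mesh can then be reconstructed.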
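The classic volume rendering step described in the abstract can be sketched as follows. Given the densities and colors that the network outputs at samples along one camera ray, the ray's pixel color is obtained by alpha compositing; the specific densities in the example are made up for illustration.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite per-sample outputs along a camera ray using the
    volume rendering quadrature used by NeRF.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) emitted RGB radiance at each sample
    deltas:    (N,) distances between adjacent samples
    Returns the rendered (3,) RGB color of the ray.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: probability the ray travels to sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas  # contribution of each sample to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Illustrative ray: empty space followed by a dense red region
densities = np.array([0.0, 0.0, 50.0, 50.0])
colors = np.array([[0, 0, 0], [0, 0, 0], [1.0, 0, 0], [1.0, 0, 0]])
deltas = np.full(4, 0.25)
print(volume_render(densities, colors, deltas))  # nearly opaque red, ~[1, 0, 0]
```

Because this quadrature is differentiable, the same computation is what lets the network's densities and colors be trained directly from photographs.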
Keywords
3D Reconstruction, Point Cloud, Deep Learning