Chandra shekar, Akash
STRUCTURE-FROM-MOTION AND RGBD DEPTH FUSION
1 online resource (65 pages) : PDF
University of North Carolina at Charlotte
Abstract: This thesis describes a technique to augment a typical RGBD sensor by fusing depth estimates obtained via Structure-from-Motion (SfM) with depth measurements from the RGBD sensor itself. Limitations of RGBD depth-sensing technology prevent capturing depth measurements in four important contexts: (1) distant surfaces (>8 m), (2) dark surfaces, (3) brightly lit indoor scenes, and (4) sunlit outdoor scenes. SfM computes depth via multi-view reconstruction from the RGB image sequence alone; as such, SfM depth estimates do not suffer the same limitations and may be computed in all four of the circumstances listed above. This work describes a novel fusion of RGBD depth data and SfM-estimated depths to generate an improved depth stream that may be processed by many important downstream applications, such as robot localization, mapping, and navigation, object tracking, pose estimation, and object recognition. The approach is demonstrated on image sequences that transition from indoor scenes, where the RGBD depth sensor functions, to outdoor scenes, where it fails.
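The abstract does not specify the fusion rule; the following is a minimal sketch of one plausible scheme, assuming the RGBD sensor marks failed pixels with 0 or NaN and that SfM depth is valid only up to an unknown global scale. The function name `fuse_depth` and the median-ratio scale recovery are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def fuse_depth(rgbd_depth, sfm_depth, max_range=8.0):
    """Hypothetical fusion: trust RGBD where it returned a valid
    in-range reading; elsewhere fall back to scale-corrected SfM depth.

    rgbd_depth: metric depth map; 0 or NaN marks a failed measurement.
    sfm_depth:  SfM depth estimate, correct only up to a global scale.
    """
    rgbd = np.asarray(rgbd_depth, dtype=float)
    sfm = np.asarray(sfm_depth, dtype=float)
    # RGBD sensors fail beyond ~8 m, so readings past max_range are rejected.
    rgbd_valid = np.isfinite(rgbd) & (rgbd > 0) & (rgbd <= max_range)
    sfm_valid = np.isfinite(sfm) & (sfm > 0)
    overlap = rgbd_valid & sfm_valid
    # Resolve the SfM scale ambiguity against metric RGBD depth using the
    # median ratio over pixels where both sources are valid (robust to outliers).
    scale = np.median(rgbd[overlap] / sfm[overlap]) if overlap.any() else 1.0
    fused = np.where(rgbd_valid, rgbd, scale * sfm)
    # Pixels neither source could measure remain undefined.
    fused[~rgbd_valid & ~sfm_valid] = np.nan
    return fused
```

Under this scheme, indoor frames are dominated by RGBD measurements, while outdoor frames (where the sensor fails everywhere) fall back entirely to SfM depth carried forward at the last recovered scale.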
3D RECONSTRUCTION; AUGMENTED REALITY; COMPUTER VISION; RGBD; SFM; SLAM
Willis, Andrew; Shin, Min
Akella, Srinivas; Tabkhi, Hamed
Thesis (M.S.)--University of North Carolina at Charlotte, 2018.
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). For additional information, see http://rightsstatements.org/page/InC/1.0/.
Copyright is held by the author unless otherwise indicated.