A monocular-vision-based dense scene reconstruction method is proposed to achieve fast and accurate three-dimensional modeling of the environment. For a freely moving handheld monocular camera, the method accurately localizes the camera within the parallel tracking and mapping (PTAM) framework. On this basis, an image sequence around each keyframe is selected to construct a variational depth estimation model; an initial depth map is obtained by discrete depth-space sampling, the depth model is optimized with a primal-dual algorithm, and the 3D model of the target scene is estimated by incorporating the camera projection model. Under the compute unified device architecture (CUDA), the depth estimation algorithm is further parallelized on a graphics processing unit (GPU), significantly improving its real-time performance. Experimental results in real scenes verify the effectiveness and feasibility of the proposed algorithm.
A monocular-vision-based three-dimensional (3D) dense scene reconstruction technique is presented to achieve fast and accurate 3D modeling of real environments. The proposed approach accurately localizes a freely moving handheld camera within the parallel tracking and mapping (PTAM) framework. Based on this self-localization, a variational depth-map estimation model is established using a bundle of images around each keyframe. A discrete depth-space sampling strategy initializes the variational model, and a primal-dual algorithm then optimizes it. Subsequently, the final 3D scene model is estimated by incorporating the projective camera imaging model. Under the compute unified device architecture (CUDA), the algorithm is parallelized on graphics processing unit (GPU) hardware, significantly improving its real-time performance. Experimental results in real-world scenarios demonstrate the feasibility and effectiveness of the proposed algorithm.
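The two depth-estimation steps named in the abstract — winner-take-all initialization over discretely sampled depth values, followed by primal-dual optimization of a variational (total-variation-regularized) model — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cost volume, the candidate depth set, the ROF-style data term, and all function names here are assumptions, and the primal-dual loop shown is the standard Chambolle-Pock iteration rather than the paper's exact scheme.

```python
import numpy as np

def sample_initial_depth(cost_volume, depths):
    """Discrete depth-space sampling: for each pixel, pick the candidate
    depth (depths: shape (K,)) with minimal photometric cost
    (cost_volume: shape (K, H, W))."""
    return depths[np.argmin(cost_volume, axis=0)]

def grad(d):
    """Forward-difference image gradient with Neumann boundary."""
    gx, gy = np.zeros_like(d), np.zeros_like(d)
    gx[:, :-1] = d[:, 1:] - d[:, :-1]
    gy[:-1, :] = d[1:, :] - d[:-1, :]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad."""
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def refine_depth_primal_dual(d0, lam=10.0, n_iter=100, tau=0.25, sigma=0.25):
    """Chambolle-Pock primal-dual iteration for the TV-regularized model
    min_d |grad d|_1 + (lam/2) * |d - d0|^2, with d0 the sampled init."""
    d, d_bar = d0.copy(), d0.copy()
    px, py = np.zeros_like(d0), np.zeros_like(d0)
    for _ in range(n_iter):
        # Dual ascent on p, then projection onto the unit ball.
        gx, gy = grad(d_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px, py = px / norm, py / norm
        # Primal descent on d (closed-form proximal step of the data term).
        d_old = d
        d = (d + tau * div(px, py) + tau * lam * d0) / (1.0 + tau * lam)
        # Over-relaxation.
        d_bar = 2.0 * d - d_old
    return d
```

The step sizes satisfy the usual convergence condition tau * sigma * L**2 <= 1 with L**2 = 8 for this discrete gradient. In practice the per-pixel updates are independent, which is what makes the CUDA/GPU parallelization mentioned in the abstract effective.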