To reconstruct a high-quality, high-resolution depth map from a low-resolution depth map contaminated by noise and holes, we propose a high-resolution depth map reconstruction algorithm based on sparse representation, under the following assumptions: the high-resolution depth map can be sparsely represented over a set of over-complete bases, and the low-resolution depth map is sampled from the high-resolution one. First, by combining the low-resolution depth map with a high-resolution color image of the same scene, the scene is partitioned into regions in which depth varies smoothly. Then, using only the low-resolution depth samples of each region as constraints, the optimal high-dimensional sparse coefficient vector is solved for within a Bayesian framework. Finally, the reconstructed regions are merged to form the complete high-resolution depth map. Experiments on the Middlebury dataset show, through a comprehensive comparison of quantitative metrics, that the proposed algorithm produces results closer to the ground truth; its advantages are even more apparent in comparisons of reconstruction results on real-scene depth maps.
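Written out, the model sketched above amounts to the following minimal formulation; the linear sampling operator $\mathbf{S}$ and the $\ell_1$ penalty (corresponding to a Laplacian sparsity prior) are illustrative assumptions, since the abstract does not fix the exact prior or solver:

$$
\mathbf{y} = \mathbf{S}\,\mathbf{x} + \mathbf{n}, \qquad \mathbf{x} \approx \mathbf{D}\,\boldsymbol{\alpha},
$$
$$
\hat{\boldsymbol{\alpha}} = \arg\max_{\boldsymbol{\alpha}} \, p(\boldsymbol{\alpha}\mid\mathbf{y})
\;=\; \arg\min_{\boldsymbol{\alpha}} \, \big\|\mathbf{y} - \mathbf{S}\mathbf{D}\boldsymbol{\alpha}\big\|_2^2 + \lambda \|\boldsymbol{\alpha}\|_1,
\qquad \hat{\mathbf{x}} = \mathbf{D}\hat{\boldsymbol{\alpha}},
$$

where $\mathbf{y}$ denotes the noisy low-resolution depth samples of one smooth region, $\mathbf{x}$ the high-resolution depth of that region, $\mathbf{D}$ the over-complete dictionary, and $\boldsymbol{\alpha}$ the sparse coefficient vector.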
Low-resolution depth maps captured by consumer-level depth cameras are usually highly contaminated by noise and quantization error. In this paper, we treat the captured depth values as samples of a high-resolution depth map that can be sparsely approximated by linear combinations of atoms from an over-complete dictionary. By further combining a high-resolution color image of the same scene with the corrupted depth samples, we divide the scene into regions in which depth changes smoothly. For each such region, our method infers the sparse coefficients in a Bayesian optimization framework with the depth samples as constraints, and then reconstructs the high-resolution depth map. Experiments show that our method outperforms previous approaches in both quantitative evaluation on the Middlebury dataset and qualitative comparisons on real-scene reconstructions.
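As a rough illustration of the "depth samples as constraints" idea, the Python sketch below reconstructs a single smooth-depth region from scattered samples using an over-complete 2D DCT dictionary and orthogonal matching pursuit. The dictionary, the greedy solver, and all function names (`overcomplete_dct`, `omp`, `reconstruct_patch`) are stand-ins chosen for brevity; they are not the paper's Bayesian solver or its color-guided region segmentation.

```python
import numpy as np

def overcomplete_dct(patch_size: int, atoms_per_dim: int) -> np.ndarray:
    """Over-complete 2D DCT dictionary: columns are vectorized patch_size x patch_size atoms."""
    k = np.arange(atoms_per_dim)
    n = np.arange(patch_size)[:, None]
    d1 = np.cos(np.pi * n * k / atoms_per_dim)       # 1D over-complete cosine atoms
    d1 -= d1.mean(axis=0, keepdims=True)             # zero-mean AC atoms
    d1[:, 0] = 1.0 / np.sqrt(patch_size)             # constant (DC) atom
    d1 /= np.linalg.norm(d1, axis=0)                 # unit-norm columns
    return np.kron(d1, d1)                           # separable 2D atoms, shape (p*p, K*K)

def omp(A: np.ndarray, y: np.ndarray, n_nonzero: int) -> np.ndarray:
    """Orthogonal matching pursuit: greedy sparse solution of y ~ A @ alpha."""
    residual, support = y.copy(), []
    alpha = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))      # most correlated atom
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on support
        residual = y - A[:, support] @ coeffs
    alpha[support] = coeffs
    return alpha

def reconstruct_patch(depth_samples: np.ndarray, mask: np.ndarray,
                      D: np.ndarray, n_nonzero: int = 8) -> np.ndarray:
    """Fill in a high-resolution depth patch from the sparse samples marked by `mask`,
    using only the dictionary rows at the sampled positions as constraints."""
    y = depth_samples[mask]            # observed low-resolution depth values
    S_D = D[mask.ravel(), :]           # sampling operator applied to the dictionary (S @ D)
    alpha = omp(S_D, y, n_nonzero)     # sparse coefficients from the sample constraints
    return (D @ alpha).reshape(depth_samples.shape)
```

For example, on an 8x8 region with `D = overcomplete_dct(8, 11)` and roughly a quarter of the pixels observed, a small `n_nonzero` (around 6 to 10) is usually enough to fill the missing depths plausibly, precisely because the region is assumed to vary smoothly; in the paper these stand-ins would be replaced by the learned dictionary, the Bayesian inference of the coefficients, and the merging of the per-region reconstructions.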