Objective: A light field camera can capture 4D light field data of a scene in a single shot, render a focal stack from it, and then extract depth information with a focus detection function. However, different focus detection functions have different response characteristics and cannot adapt to all scenes, and the depth information extracted by most existing methods suffers from large defocus errors and poor robustness. To address this problem, we propose a new depth extraction method based on a light-field focus detection function to obtain high-accuracy depth information. Method: We design a windowed gradient mean-square-error focus detection function to extract depth information from the focal stack. Using the all-in-focus color image and a defocus function, we mark the defocused regions in the image and correct the defocus errors with a neighborhood-search algorithm. Finally, a Markov random field (MRF) fuses the corrected depth map extracted by the Laplacian operator with the depth map obtained from the gradient mean-square-error function, yielding a high-accuracy depth image. Result: On the Lytro dataset and our own captured test data, the depth information extracted by our method contains less noise than that of other state-of-the-art algorithms; precision improves by about 9.29% on average, and mean squared error decreases by about 0.056 on average. Conclusion: The depth information extracted by our method contains less speckle noise; guided by color information, the method effectively corrects defocus errors and performs well in scenes with many smooth regions.
Objective: A light field camera can obtain 4D light field data from stereoscopic space and generate focal stacks with one shot. Depth information can then be extracted by using a focus detection function. However, generalizing across varying scenes is difficult because of the distinct response characteristics of focus detection functions. Furthermore, most existing methods produce large defocusing errors and are not robust in practical use. In this paper, we present a new depth extraction method based on light field images (i.e., focal slices and all-focus images) to obtain high-accuracy depth information. Method: We develop a windowed focus detection function based on gradient mean square error to extract depth information. Thereafter, we correct the defocusing errors by using a local search method in the areas marked by the defocusing function. Finally, we fuse the depth maps to improve accuracy. Result: Experiments on the Lytro dataset and our own data show that our approach achieves higher accuracy with less noise than other state-of-the-art methods. Precision increases by approximately 9.29%, and MSE decreases by approximately 0.056 compared with other advanced methods. Conclusion: Using the windowed gradient mean square error function to extract depth information produces less speckle noise. By using the color information from all-focus images, our approach can correct the defocusing error. Finally, the depth edges fused in an MRF framework are clear and maintain good consistency with the color image. The depth estimated by our approach is better than that of other methods for low-texture images.
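The core step described above, scoring each focal-stack slice with a windowed gradient mean-square-error (i.e., local gradient-variance) focus measure and taking the slice of maximal response per pixel as the depth index, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the window size `win` and the use of gradient magnitude are assumptions, and the defocus correction and MRF fusion stages are omitted.

```python
import numpy as np

def focus_measure(img, win=7):
    """Windowed gradient mean-square-error focus measure (illustrative sketch).

    Computes the variance of the gradient magnitude over a win x win
    neighborhood; sharper (in-focus) regions yield larger responses.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    g = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude

    pad = win // 2

    def box_mean(a):
        # Sliding-window mean via summed shifted copies (edge-padded).
        p = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (win * win)

    # Var[g] = E[g^2] - (E[g])^2 within each window
    return box_mean(g ** 2) - box_mean(g) ** 2

def depth_from_stack(stack, win=7):
    """Depth index = slice of maximal focus response at each pixel."""
    responses = np.stack([focus_measure(s, win) for s in stack])
    return responses.argmax(axis=0)
```

In a full pipeline, the raw index map from `depth_from_stack` would then be refined: defocused regions flagged with the all-focus color image are corrected by neighborhood search, and the result is fused with a Laplacian-based depth map in an MRF framework.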