Since current depth-map compression rarely aims directly at reducing rendering distortion, this paper explores a method that compresses depth maps by minimizing distortion in the rendered views, building an estimation model of the rendering distortion caused by depth changes due to lossy depth coding (rather than by the rendering method itself). First, the effect that each pixel's coding-induced depth change has on rendering is derived; then the influence of local video characteristics on rendering distortion is taken into account, yielding an accurate region-based rendering distortion model. Finally, the view synthesis distortion model is introduced into the rate-distortion model of depth coding in place of the depth-map distortion itself, reducing synthesized-view distortion. Simulation results show that the proposed method improves the objective quality of rendered virtual views, increases depth-map coding efficiency, and reduces rendering artifacts, achieving a PSNR gain of about 2 dB over a Lagrangian-optimized mode selection that considers only depth-map distortion.
Since compressing depth maps with existing video coding techniques yields unacceptable distortions in rendered virtual views, depth maps need to be compressed in a way that minimizes distortion in the rendered views. A distortion model is proposed that approximates the rendering distortion caused by depth changes introduced by depth coding. First, the relationship between distortion in the coded depth map and distortion in the rendered view is derived. Then, a region-based distortion model incorporating local video characteristics is proposed for precisely estimating distortion in view synthesis. Finally, the new distortion metric is used in rate-distortion (RD) optimized encoding mode decisions, reducing rendering distortion. Simulation results show that the proposed techniques improve the objective quality of rendered virtual views by up to 2 dB over the Lagrangian-optimization-based mode selection technique that considers distortion only in the depth map.
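The core idea above can be sketched in a few lines: a depth coding error shifts each warped pixel horizontally, so the induced rendering distortion can be approximated by that shift times the local texture gradient, and this synthesis distortion (rather than depth-map SSD) then drives the Lagrangian mode decision. The sketch below is a minimal illustration under assumed simplifications, not the paper's exact model: the constant `k` collapses the camera baseline, focal length, and depth-range terms of the depth-to-disparity mapping, and the region-based weighting is reduced to a per-pixel gradient.

```python
import numpy as np

def synthesis_distortion(depth_err, texture, k=0.1):
    """Estimate rendering distortion caused by depth coding error.

    A depth error dD shifts the warped pixel by a disparity
    dp = k * dD (k is an assumed camera/depth-range constant).
    The resulting texture error is approximated as the shift
    times the local horizontal gradient, summed as SSD.
    """
    grad = np.abs(np.gradient(texture.astype(float), axis=1))
    dp = k * depth_err.astype(float)
    return float(np.sum((dp * grad) ** 2))

def select_mode(candidates, lam):
    """Lagrangian mode decision minimizing J = D_vs + lambda * R,
    where D_vs is the estimated view-synthesis distortion of the
    candidate mode instead of the depth-map distortion itself."""
    return min(candidates, key=lambda c: c["D_vs"] + lam * c["R"])
```

Note the key property the model captures: the same depth error is harmless over flat texture (zero gradient, zero synthesis distortion) but costly at texture edges, which is why depth-map SSD alone is a poor proxy for rendered-view quality.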