To address the substantial visual redundancy in depth video compression, a depth video coding method based on a foveated just-noticeable-distortion (FJND) model is proposed. First, virtual view images are rendered from the color and depth videos of the left and right views, and the FJND of the virtual view images is obtained with the FJND model. Then, the allowable distortion of the left-view depth video is determined from the relationship between geometry shift and depth-value distortion; the left-view depth video is partitioned into regions and coded with adaptive quantization parameters, and the right-view depth video is coded according to its quantization-parameter relationship with the left view. Experimental results show that, at the same bit rate, the proposed method improves the quality of virtual view images by 0.48 dB on average; at the same virtual-view rendering quality, it reduces the depth video coding bit rate by 26% on average.
To reduce visual redundancies in multiview depth videos, a foveated just-noticeable-distortion (FJND) model based multiview depth video coding method is proposed in this paper. First, the color and depth data of the left and right views are used to synthesize the intermediate view, and the FJND maps of the intermediate view are estimated with the FJND model. Then, the maximum allowable distortion of the left-view depth map is derived from the linear relationship between depth map distortion and geometry distortion in the synthesized view. Subsequently, the left-view depth map is divided into regions and coded with adaptive quantization parameters, while the right-view depth map is coded according to the quantization-parameter relationship between the left and right views. Experimental results show that, compared with the traditional method, the proposed approach increases the peak signal-to-noise ratio (PSNR) of the synthesized views by 0.48 dB on average at the same coding bit rate, and achieves average bit rate savings of 26% at the same PSNR of the intermediate views.
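The abstract states that the allowable left-view depth distortion is derived from a linear relationship between depth-map distortion and geometry distortion in the synthesized view. A minimal sketch of that idea is below, assuming the standard DIBR warping relation in which an 8-bit depth-value error scales into a horizontal pixel shift via the camera focal length, baseline, and near/far clipping planes; the function names and the specific constants are illustrative assumptions, not taken from the paper.

```python
def geometry_shift_per_depth_error(focal_px, baseline, z_near, z_far):
    # k: horizontal pixel shift in the synthesized view caused by one
    # unit of error in an 8-bit depth value (common DIBR relation;
    # the exact form is an assumption, not stated in the abstract).
    return (focal_px * baseline / 255.0) * (1.0 / z_near - 1.0 / z_far)

def max_allowable_depth_distortion(fjnd_threshold_px, k):
    # Largest depth-value error whose induced geometry shift stays
    # below the FJND visibility threshold (given in pixels).
    return fjnd_threshold_px / k

# Illustrative camera parameters only.
k = geometry_shift_per_depth_error(focal_px=1000.0, baseline=0.05,
                                   z_near=1.0, z_far=100.0)
allowed = max_allowable_depth_distortion(fjnd_threshold_px=1.0, k=k)
```

Regions with a higher FJND threshold tolerate a larger `allowed` depth error, which is what lets the coder assign them a coarser quantization parameter.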