In the process of video coding, the encoded data inevitably suffers some degree of distortion. Among the distorted YUV color points, some fall outside the valid range of the color space, so the reconstruction stage of the video coding framework applies an error-correction (clip) step to compensate. The clip method in most existing video coding standards simply clamps each YUV component to the 0-255 range, ignoring how the color-space transform reshapes the actual YUV gamut. To address this issue, this paper proposes a new coding-error correction method based on the shortest distance in the color space (minimum distance clip). Using the YUV color-space model and solid-geometry theory, the proposed method corrects each out-of-range point by finding its minimum-distance point on the surface of the YUV color space. Experiments with the H.264 reference software JM13.0 show that, compared with the conventional clip method, the proposed method improves video coding performance in both subjective and objective respects, and should benefit advanced video coding applications with high color-quality demands, such as ultra-HD video and 3DTV.
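The contrast between the two clip strategies can be illustrated numerically. The sketch below is not the paper's JM13.0 implementation: the transform matrix is the full-range BT.601 RGB-to-YCbCr matrix, the function names are hypothetical, and the minimum-distance projection is realized here by projected gradient descent over the RGB cube (the valid YUV gamut is the affine image of that cube, so minimizing the YUV-space Euclidean distance over it is a small box-constrained least-squares problem).

```python
import numpy as np

# Full-range BT.601 transform: yuv = M @ rgb + offset, rgb in [0, 255]^3.
# The valid YUV gamut is the parallelepiped {M r + offset : 0 <= r <= 255}.
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])
offset = np.array([0.0, 128.0, 128.0])

def conventional_clip(yuv):
    """Baseline: clamp each YUV component to [0, 255] independently."""
    return np.clip(np.asarray(yuv, dtype=float), 0.0, 255.0)

def minimum_distance_clip(yuv, iters=200, lr=0.5):
    """Sketch of a minimum-distance clip: find the point of the YUV gamut
    closest (in YUV Euclidean distance) to a distorted point, by running
    projected gradient descent on 0.5 * ||M r + offset - yuv||^2 over the
    RGB box constraint 0 <= r <= 255."""
    yuv = np.asarray(yuv, dtype=float)
    # Initial guess: invert the transform, then clamp into the RGB cube.
    rgb = np.clip(np.linalg.solve(M, yuv - offset), 0.0, 255.0)
    for _ in range(iters):
        grad = M.T @ (M @ rgb + offset - yuv)  # gradient w.r.t. rgb
        rgb = np.clip(rgb - lr * grad, 0.0, 255.0)
    return M @ rgb + offset
```

For a point already inside the gamut the projection is the identity, while an out-of-gamut point is moved to the gamut surface; the per-component clamp, by contrast, can leave a point that is inside the 0-255 cube but still outside the true YUV gamut, which is precisely the weakness the abstract identifies.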