A contourlet-transform-based multiscale image quality assessment metric is proposed. First, both the reference image and the test image are converted to the YUV (YCbCr) color space, and the luminance component is decomposed in the contourlet domain into multiscale, multi-directional subbands to extract features at different scales. Guided by the just noticeable difference (JND) model of visual perception, features that the human eye cannot distinguish are merged. The regional mutual information (RMI) of the merged corresponding subbands is then computed to measure the similarity between them. Finally, the contrast sensitivity function (CSF) is used to weight the subband similarities, yielding the overall quality score. Experiments on the 982 distorted images covering five distortion types in the LIVE (Laboratory for Image and Video Engineering) database show that the proposed method achieves a Spearman rank-order correlation coefficient (SROCC) of 0.9603, a linear correlation coefficient (CC) of 0.9696, and a root mean square error (RMSE) of 5.6584. The results demonstrate that the proposed method evaluates images with various distortion types effectively, shows better consistency between subjective and objective evaluations than algorithms such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and better conforms to the characteristics of the human visual system.
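To make the pooling stage concrete, the following is a minimal Python sketch of that final step only: it computes a mutual-information similarity between pairs of corresponding subbands and pools them with CSF-style weights. The contourlet decomposition, the JND-based feature merging, and the exact RMI formulation used in the paper are omitted; plain mutual information over a joint histogram, the luminance conversion weights, and the example CSF weights are stand-in assumptions, not the authors' implementation.

```python
import numpy as np

def luminance(rgb):
    """Luma (Y) component of an RGB image using BT.601 weights (assumed here)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def mutual_information(a, b, bins=64):
    """Mutual information between two equally sized subbands via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of b
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def quality_score(ref_subbands, test_subbands, csf_weights):
    """CSF-weighted pooling of per-subband similarities (mutual information here)."""
    sims = [mutual_information(r, t) for r, t in zip(ref_subbands, test_subbands)]
    w = np.asarray(csf_weights, dtype=float)
    return float(np.dot(w, sims) / w.sum())

# Toy usage: random arrays stand in for contourlet subbands of reference/test images.
rng = np.random.default_rng(0)
ref = [rng.standard_normal((64, 64)) for _ in range(3)]
test = [s + 0.1 * rng.standard_normal(s.shape) for s in ref]
print(quality_score(ref, test, csf_weights=[0.5, 1.0, 0.8]))
```

In this sketch, a less distorted test image yields higher mutual information with the reference subbands and therefore a higher pooled score; the CSF weights bias the pooling toward the frequency bands the eye is most sensitive to.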