A new classification-based computational model of visual saliency was proposed. Spectral residual, global luminance, and color contrast were first used to detect an image's salient regions; saliency detection was then treated as an image labeling problem, and a saliency fusion algorithm based on multiscale conditional random fields (CRFs) was presented to generate the labeling result. The CRF parameters were learned by maximum-likelihood estimation (MLE) from Tobii eye-tracking data. Experimental results show that the proposed model outperforms eight state-of-the-art algorithms and agrees well with psychophysical findings.
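For reference, the sketch below illustrates the spectral residual channel named in the abstract, following the standard Hou and Zhang formulation rather than the paper's own implementation; the filter widths, smoothing scale, and normalization are assumptions, and the global luminance, color contrast, and multiscale CRF fusion stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, sigma=3.0):
    """Spectral residual saliency map for a 2-D grayscale image (values in [0, 1])."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)                     # log-amplitude spectrum
    phase = np.angle(f)                                     # phase spectrum, kept unchanged
    residual = log_amp - uniform_filter(log_amp, size=3)    # spectral residual (assumed 3x3 mean filter)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma)              # smooth the raw map
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
```

In a full pipeline of the kind the abstract describes, this map would be one of several per-pixel features (alongside luminance and color contrast) fed into the CRF-based fusion stage.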