A new computational model of visual attention for intelligent robots is proposed. Inspired by biology, the model simulates both the bottom-up and top-down processes of human visual selective attention. Multiple low-level features are extracted from the input image at multiple scales; the amplitude spectrum of each feature map is analyzed in the frequency domain, and the corresponding feature saliency map is constructed in the spatial domain. Based on the saliency map, the positions of candidate focuses of attention and the sizes of the attention regions are computed, and attention is shifted among the focuses according to the given task. The model is tested on a variety of natural images, and the experimental results are reported together with qualitative and quantitative analyses. The results are consistent with human visual attention, indicating that the proposed model is effective in terms of both attention quality and computational speed.
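The abstract states that the amplitude spectrum of each feature map is analyzed in the frequency domain and the saliency map is reconstructed in the spatial domain, but does not give the exact formulation. The following is a minimal sketch of one common frequency-domain saliency scheme (spectral-residual style) applied to a single feature channel, using NumPy; the function name and filter settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_saliency(feature_map):
    """Frequency-domain saliency sketch for one feature channel:
    analyze the amplitude spectrum, keep the phase, and reconstruct
    a saliency map in the spatial domain (spectral-residual style)."""
    f = np.fft.fft2(feature_map.astype(np.float64))
    log_amplitude = np.log(np.abs(f) + 1e-8)   # log amplitude spectrum
    phase = np.angle(f)                        # phase spectrum, kept unchanged
    # Estimate the smooth trend of the log amplitude with a 3x3 box filter.
    h, w = feature_map.shape
    padded = np.pad(log_amplitude, 1, mode='edge')
    trend = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) / 9.0
    residual = log_amplitude - trend           # "irregular" part of the spectrum
    # Back to the spatial domain: residual amplitude combined with the phase.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # Normalize to [0, 1] so channels can later be fused into one saliency map.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)
```

In a multi-feature, multi-scale setting, one such map would be computed per feature channel and scale and the results fused (for example by weighted summation) into the final saliency map.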
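The abstract also mentions computing the positions and sizes of candidate focuses of attention from the saliency map and shifting attention among them according to the given task. A common way to realize such shifting, which may differ from the authors' mechanism, is winner-take-all selection with inhibition of return; the sketch below uses a hypothetical fixed suppression radius, whereas the paper derives the attention-region size from the saliency map itself.

```python
import numpy as np

def select_focuses(saliency, num_focuses=3, radius=32):
    """Pick candidate focuses of attention by repeatedly taking the saliency
    maximum and suppressing its neighborhood (inhibition of return).
    `radius` is a hypothetical fixed attention-region size."""
    s = saliency.copy()
    h, w = s.shape
    yy, xx = np.mgrid[0:h, 0:w]
    focuses = []
    for _ in range(num_focuses):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        focuses.append((int(y), int(x), float(s[y, x])))
        # Suppress a disc around the current winner so the next-most-salient
        # region is selected on the following iteration.
        s[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0.0
    return focuses
```

Top-down, task-driven guidance can then be approximated by weighting the individual feature channels before fusion so that task-relevant features dominate the saliency map used for selection.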