A method for describing and matching spatial context is proposed that effectively improves the distinguishability of target objects in visual-word-based image retrieval. First, an affine-covariant spatial neighborhood is defined to obtain an adaptive multi-level spatial context description. Second, a fuzzy Earth Mover's Distance metric is presented to compute spatial-context similarity, reducing the impact of region-feature clustering errors on spatial-relation matching. In addition, noisy regions are filtered out in a preprocessing step based on entropy and self-similarity. Experimental results show that, compared with existing methods, the proposed method achieves a relative improvement of 10.8% in average retrieval precision.
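The fuzzy Earth Mover's Distance above builds on the classical EMD, which is the minimum-cost solution of a transportation problem between two weighted feature sets. As a minimal sketch (the fuzzy weighting from the paper is not reproduced here; the LP formulation via `scipy.optimize.linprog` is an illustrative choice, not the paper's implementation), plain EMD between two normalized weight vectors can be computed as:

```python
import numpy as np
from scipy.optimize import linprog

def emd(p, q, C):
    """Earth Mover's Distance between normalized weight vectors p (m,) and
    q (n,), with ground-distance matrix C (m, n), solved as a transportation
    linear program over the flow matrix f (flattened to m*n variables)."""
    m, n = len(p), len(q)
    c = C.reshape(m * n)  # objective: total transport cost sum_ij f_ij * C_ij
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row i of flow sums to p[i]
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column j of flow sums to q[j]
    b_eq = np.concatenate([p, q])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    # p and q each sum to 1, so total flow is 1 and the cost equals the EMD.
    return res.fun

# Example: all mass at bin 0 vs. all mass at bin 3, ground distance |i - j|.
p = np.array([1.0, 0.0, 0.0, 0.0])
q = np.array([0.0, 0.0, 0.0, 1.0])
C = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :]).astype(float)
print(emd(p, q, C))  # moving unit mass a distance of 3 costs 3.0
```

In the retrieval setting sketched by the abstract, `p` and `q` would hold the weights of regions in two spatial neighborhoods and `C` their pairwise ground distances; the fuzzy variant additionally softens cluster (visual-word) assignments so that a region mis-clustered to a nearby word still contributes to the match.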