Many recent state-of-the-art image retrieval approaches are based on the Bag-of-Visual-Words model, representing an image with a set of visual words obtained by quantizing its local SIFT (Scale-Invariant Feature Transform) features. Feature quantization reduces the discriminative power of local features and unavoidably causes many false local matches between images, which degrades retrieval accuracy. To filter out these false matches, the geometric context among visual words has been widely explored for verifying geometric consistency. However, existing global or local geometric verification methods are either computationally expensive or achieve only limited accuracy. To address this issue, in this paper we focus on partial-duplicate Web image retrieval and propose a scheme that encodes spatial context for visual matching verification. An efficient affine enhancement scheme is further proposed to refine the verification results. Experiments on partial-duplicate Web image search, using a database of one million images, demonstrate the effectiveness and efficiency of the proposed approach. Evaluation on a 10-million-image database further reveals the scalability of our approach.
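To make the idea of spatial-context encoding concrete, below is a minimal Python sketch of one plausible instantiation: each image's matched feature coordinates are encoded as binary relative-position maps (is feature j to the right of / above feature i?), and matches whose relative positions disagree between the two images are iteratively discarded. The function names (`spatial_maps`, `verify_matches`) and the exact rejection rule are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def spatial_maps(points):
    """Build binary relative-position maps for an N x 2 array of
    (x, y) feature coordinates.
    xmap[i, j] = 1 iff point j lies to the right of point i;
    ymap[i, j] = 1 iff point j lies above point i."""
    x, y = points[:, 0], points[:, 1]
    xmap = (x[None, :] > x[:, None]).astype(np.uint8)
    ymap = (y[None, :] > y[:, None]).astype(np.uint8)
    return xmap, ymap

def verify_matches(query_pts, db_pts, max_inconsistency=0):
    """Iteratively drop the match whose spatial relations disagree
    most between the two images, until the surviving matches are
    mutually consistent (or too few matches remain)."""
    keep = np.arange(len(query_pts))
    while len(keep) > 2:
        qx, qy = spatial_maps(query_pts[keep])
        dx, dy = spatial_maps(db_pts[keep])
        # XOR flags pairs whose relative positions disagree
        # between the query image and the database image.
        violations = (qx ^ dx).sum(axis=1) + (qy ^ dy).sum(axis=1)
        worst = violations.argmax()
        if violations[worst] <= max_inconsistency:
            break
        keep = np.delete(keep, worst)
    return keep  # indices of matches that pass verification

# Toy usage: three spatially consistent matches plus one false match.
q = np.array([[10, 10], [50, 20], [30, 60], [90, 90]], float)
d = np.array([[12, 11], [52, 22], [33, 63], [5, 5]], float)
print(verify_matches(q, d))  # [0 1 2]: the fourth match is rejected
```

Because the maps are binary and compared with XOR, this check costs far less than estimating a full geometric transformation (e.g. RANSAC-based affine fitting), which is the efficiency argument the abstract alludes to.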