Objective With the emergence of massive shared images in the Web 2.0 era, obtaining precise and descriptive region-level annotations for images has become a significant research topic. Method In this paper, we propose an image annotation framework based on region-semantic diverse density, which focuses on the differences in visual features and spatial structure among regions. Specifically, a feature diverse density based on distance similarity is used to annotate region feature semantics, and the penalty effect of negatively correlated instances is introduced to annotate region spatial-relation semantics and attribute semantics (color, shape, and texture). Result Experiments on subsets of the NUS-WIDE and MSRC datasets demonstrate the effectiveness of the proposed method: the accuracy of region attribute annotation exceeds 80%, and the average precision of image retrieval based on attribute annotations reaches 82%. Conclusion The experimental results show that the proposed framework can accurately identify relevant semantic regions and attribute annotations, and effectively solves the region annotation problem.
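The abstract's core idea, diverse density with a penalty from negatively correlated instances, can be illustrated with a minimal sketch of the classic diverse-density score: a candidate concept point scores highly if every positive bag contains at least one nearby instance, while instances in negative bags multiplicatively penalize it. This is a generic illustration of the diverse-density principle, not the authors' actual algorithm; the function names, the Gaussian similarity, and the toy 2-D feature vectors are all assumptions for demonstration.

```python
import math

def instance_prob(target, instance):
    # Gaussian-like similarity between a candidate concept point
    # and a single instance (here, a toy 2-D region feature vector)
    d2 = sum((a - b) ** 2 for a, b in zip(target, instance))
    return math.exp(-d2)

def diverse_density(target, pos_bags, neg_bags):
    # A positive bag supports the target if ANY of its instances is close
    # (noisy-or, approximated by the max); instances in negative bags
    # act as a penalty, driving the score toward zero when they are near.
    dd = 1.0
    for bag in pos_bags:
        dd *= max(instance_prob(target, inst) for inst in bag)
    for bag in neg_bags:
        dd *= 1.0 - max(instance_prob(target, inst) for inst in bag)
    return dd

# Toy usage: one positive bag with two candidate regions, one negative bag.
pos_bags = [[(0.0, 0.0), (5.0, 5.0)]]
neg_bags = [[(5.0, 5.0)]]
# The point shared with the negative bag is penalized to ~0,
# while the point supported only by the positive bag scores ~1.
print(diverse_density((0.0, 0.0), pos_bags, neg_bags))
print(diverse_density((5.0, 5.0), pos_bags, neg_bags))
```

In a region-annotation setting, each image would play the role of a bag and its segmented regions the instances, so that a semantic concept point is pulled toward regions shared by positively labeled images and pushed away from regions of negatively labeled ones.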