Objective Existing cross-scenario clothing retrieval frameworks retrieve on the torso region of the body and refine the results through clothing style recognition, which causes loss of clothing information and makes style recognition unreliable across scenarios. To address these problems, we propose a new clothing segmentation method and a clothing style recognition method based on cross-domain (domain-adaptive) dictionary learning. Methods First, a segmentation method that combines superpixel fusion with pose estimation is proposed to extract complete clothing items; retrieving with the complete garment preserves as much clothing information as possible. Then, for clothing style recognition, intermediate-domain dictionaries between the product clothing dataset and the daily clothing dataset are learned sequentially, so that the dictionary gradually adapts to daily clothing images and the accuracy of style recognition in different scenarios improves. In addition, because large clothing databases with fine-grained annotations are scarce, we construct two fine-grained annotated clothing databases. Results The proposed method is evaluated on the widely used Fashionista dataset and on the two databases constructed in this paper, and is compared with state-of-the-art methods. The retrieval precision reaches 62.1% for upper-body clothing and 63.4% for lower-body clothing, and the proposed method outperforms current methods in clothing segmentation, style recognition, and retrieval. Conclusion To address inaccurate clothing segmentation in existing cross-scenario clothing retrieval frameworks, we propose a hierarchical over-segmentation fusion method and a domain-adaptive cross-domain clothing style recognition method. The proposed approach preserves the integrity of clothing items, improves the precision of cross-scenario clothing retrieval and style recognition, and is suitable for daily clothing retrieval.
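To make the segmentation idea concrete, the sketch below over-segments an image with SLIC superpixels and then keeps superpixels whose mean color is close to that of a torso region derived from pose keypoints. It is a minimal, single-pass simplification of the hierarchical fusion described above; the keypoint names, rectangular torso box, uint8 input assumption, and color threshold are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: superpixel over-segmentation + color-based fusion restricted to
# a torso region derived from pose keypoints (illustrative assumptions only).
import numpy as np
from skimage.segmentation import slic

def torso_mask(shape, keypoints):
    """Rectangular torso mask from assumed (x, y) keypoints:
    'l_shoulder', 'r_shoulder', 'l_hip', 'r_hip'."""
    joints = ('l_shoulder', 'r_shoulder', 'l_hip', 'r_hip')
    xs = [keypoints[k][0] for k in joints]
    ys = [keypoints[k][1] for k in joints]
    mask = np.zeros(shape[:2], dtype=bool)
    mask[int(min(ys)):int(max(ys)), int(min(xs)):int(max(xs))] = True
    return mask

def segment_clothing(image, keypoints, n_segments=300, color_thresh=0.08):
    """Over-segment, then keep superpixels that overlap the torso region and
    whose mean color is close to the torso's mean color."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    mask = torso_mask(image.shape, keypoints)
    img = image.astype(float) / 255.0          # assumes uint8 RGB input
    seed_color = img[mask].mean(axis=0)        # reference color of the torso region

    clothing = []
    for lab in np.unique(labels):
        sel = labels == lab
        overlaps_torso = (sel & mask).any()
        close_in_color = np.linalg.norm(img[sel].mean(axis=0) - seed_color) < color_thresh
        if overlaps_torso and close_in_color:
            clothing.append(lab)
    return np.isin(labels, clothing)           # boolean clothing mask
```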
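The cross-scenario recognition step can be pictured as learning a chain of dictionaries that starts from the product-clothing (source) domain and is repeatedly re-fitted on mixtures containing a growing share of daily-clothing (target) data, after which target images are classified by their sparse codes. The sketch below uses scikit-learn's DictionaryLearning and a linear SVM; the mixing schedule, feature representation, and classifier are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch: sequential intermediate-domain dictionary learning for
# cross-scenario clothing style recognition (illustrative assumptions only).
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

def adapt_dictionary(X_src, X_tgt, n_atoms=128, n_steps=4, alpha=1.0):
    """Learn a source dictionary, then re-fit it on mixtures containing an
    increasing share of target samples so it drifts toward the target domain."""
    learner = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200,
                                 transform_algorithm='lasso_lars')
    learner.fit(X_src)                       # dictionary for the product-clothing domain
    D = learner.components_

    for step in range(1, n_steps + 1):
        n_tgt = int(step / n_steps * len(X_tgt))
        X_mix = np.vstack([X_src, X_tgt[:n_tgt]]) if n_tgt else X_src
        learner = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200,
                                     dict_init=D, transform_algorithm='lasso_lars')
        learner.fit(X_mix)                   # intermediate-domain dictionary
        D = learner.components_
    return learner                           # final, target-adapted dictionary

def train_style_classifier(learner, X_train, y_train):
    """Encode features with the adapted dictionary and train a linear classifier."""
    codes = learner.transform(X_train)
    return LinearSVC().fit(codes, y_train)
```

Initializing each step from the previous dictionary (dict_init=D) is what makes the sequence "intermediate": each dictionary is a small perturbation of the last, so the representation moves gradually from the product domain to the daily-clothing domain instead of being re-learned from scratch.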