The k-local convex distance nearest neighbor classifier (CKNN) corrects the decision boundary of kNN when the training set is small, thereby improving classification performance. The k sub-convex-hull classifier (kCH) weakens the sensitivity of CKNN to the number of classes and to ring-shaped sample distributions, further improving performance. However, kCH remains sensitive to the distance metric, and the classes represented among the k nearest neighbors of a test instance are often severely imbalanced, which degrades classification performance. To address these problems, this paper proposes a neighbor k-convex-hull classifier (NCH), whose robustness to the sample-space metric is improved by metric learning and ensemble learning. Extensive experiments show that the proposed metric-learning-based neighbor k-convex-hull ensemble is significantly superior to several state-of-the-art nearest neighbor classifiers.
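The core idea shared by this family of classifiers can be illustrated with a minimal sketch: assign a test point to the class whose convex hull, formed from that class's samples among the point's k nearest neighbors, is closest. The sketch below is only illustrative, not the paper's NCH method: the function names are hypothetical, it uses a plain Euclidean metric (no learned metric), and it omits the ensemble step. The hull distance is computed as a small quadratic program over simplex weights, solved here with SciPy's SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_convex_hull(x, P):
    """Distance from x to conv(P): min_w ||x - P^T w||^2, w >= 0, sum(w) = 1.

    P is an (n, d) array whose rows are the hull's generating points.
    """
    n = P.shape[0]
    w0 = np.full(n, 1.0 / n)                       # start at the centroid
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    bounds = [(0.0, None)] * n
    res = minimize(lambda w: np.sum((x - P.T @ w) ** 2), w0,
                   method='SLSQP', bounds=bounds, constraints=cons)
    return np.sqrt(max(res.fun, 0.0))

def nch_predict(x, X_train, y_train, k=5):
    """Toy neighborhood convex-hull classification (illustrative only)."""
    d = np.linalg.norm(X_train - x, axis=1)        # Euclidean, not a learned metric
    idx = np.argsort(d)[:k]                        # k nearest neighbors of x
    Xk, yk = X_train[idx], y_train[idx]
    best_c, best_d = None, np.inf
    for c in np.unique(yk):                        # hull distance per class in the neighborhood
        dc = dist_to_convex_hull(x, Xk[yk == c])
        if dc < best_d:
            best_c, best_d = c, dc
    return best_c
```

For example, a point lying inside the hull of one class's neighbors has hull distance zero to that class and is assigned to it, even if a raw nearest-neighbor vote in the same neighborhood would be imbalanced toward another class.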