Under the multi-label learning framework, each example is represented by a single instance while being associated with multiple class labels simultaneously. Existing multi-label lazy learning algorithms do not fully exploit the correlations among the multiple labels of an example, so their generalization performance suffers to some extent. To address this problem, a novel multi-label lazy learning algorithm named IMLLA is proposed. Given a test example, IMLLA first identifies its neighboring examples in the training set with respect to each possible class, then constructs a label counting vector from the multi-label information of those neighbors and feeds it to trained linear classifiers for prediction. Because IMLLA exploits information embedded in other classes when predicting each class, the correlations among the multiple labels of an example are fully considered. Experiments on synthetic data sets as well as real-world data sets show that IMLLA significantly outperforms commonly used multi-label learning algorithms.
Multi-label learning deals with problems where each example is represented by a single instance while associated with multiple class labels. A number of multi-label learning approaches have been proposed recently, among which multi-label lazy learning methods have been shown to yield good generalization ability. However, existing multi-label lazy learning algorithms do not address the correlations between the different labels of an example, so their performance can be negatively affected. In this paper, an improved multi-label lazy learning approach named IMLLA is proposed. Given a test example, IMLLA first identifies its neighboring instances in the training set for each possible class. A label counting vector is then generated from those neighboring instances and fed to the trained linear classifiers. In this way, information embedded in other classes is involved in predicting each class label, so that the inter-label relationships of each example are appropriately exploited. Experiments are conducted on several synthetic data sets and two benchmark real-world data sets concerning natural scene classification and yeast gene functional analysis. The results show that IMLLA outperforms well-established multi-label learning algorithms, including a state-of-the-art lazy-style multi-label learner.
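The prediction pipeline described above (per-class neighbor retrieval, a label counting vector, then per-label linear classifiers) can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' exact formulation: the neighbor count `k`, the way counts are pooled across classes, and the form of the linear weights `W` are all assumptions made for the sketch.

```python
import numpy as np

def imlla_style_predict(x, X, Y, W, k=3):
    """Illustrative sketch of an IMLLA-style prediction step.

    x : (d,) test instance
    X : (n, d) training instances
    Y : (n, q) binary label matrix (1 = label relevant)
    W : (q, q + 1) per-label linear weights, last column is a bias
        term; assumed to have been trained beforehand
    k : number of neighbors retrieved per class (assumed parameter)
    """
    n, q = Y.shape
    dist = np.linalg.norm(X - x, axis=1)
    counts = np.zeros(q)
    for c in range(q):
        # Step 1: retrieve the k nearest training examples that
        # carry class label c.
        pos = np.flatnonzero(Y[:, c] == 1)
        if pos.size:
            nn = pos[np.argsort(dist[pos])[:min(k, pos.size)]]
            # Step 2: accumulate the label occurrences of those
            # neighbors into a shared label counting vector, so each
            # label's classifier also sees evidence from other labels.
            counts += Y[nn].sum(axis=0)
    # Step 3: feed the counting vector (plus a constant 1 for the
    # bias) to every label's linear classifier; a positive score
    # predicts the label as relevant.
    scores = W @ np.append(counts, 1.0)
    return scores > 0
```

Because the counting vector aggregates neighbor labels from every class, each label's linear classifier receives information embedded in the other classes, which is how the sketch mirrors the inter-label correlation mechanism the abstract describes.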