In practical applications, labeled negative examples are often unobtainable for various reasons, which renders traditional classification methods ineffective; semi-supervised learning from positive and unlabeled data has therefore become a hot research topic. Researchers have proposed a variety of solutions, but none of them handles the imbalanced classification problem effectively, especially when the hidden negative examples are very few or the instances in the training set are unevenly distributed. This paper therefore proposes LiKL, a KL divergence-based semi-supervised classification algorithm: it successively mines the most reliable positive and negative examples from the unlabeled set, and then classifies the remaining data with a trained, enhanced classifier. Compared with other methods, LiKL not only improves classification precision and recall but is also robust.
In many real applications, such as Web search, medical diagnosis, and earthquake identification, it is often difficult or quite expensive to obtain labeled negative examples for learning. This situation renders traditional classification techniques ineffective, because the precondition that every class has its own labeled instances is not met. Semi-supervised learning from positive and unlabeled data has therefore become a hot topic in the literature. Researchers have proposed many methods in past years, but these cannot cope well with the imbalanced classification problem, especially when the number of hidden negative examples in the unlabeled set is relatively small or the distribution of examples in the training set is highly uneven. In this paper, a novel KL divergence-based semi-supervised classification algorithm, named LiKL (semi-supervised learning algorithm from imbalanced data based on KL divergence), is proposed to tackle this problem. The proposed approach first identifies likely positive examples in the unlabeled set, then likely negative ones, and finally classifies the remaining unlabeled data with an enhanced logistic regression classifier. Experiments show that, compared with previous work in the literature, the proposed approach not only improves precision and recall but is also very robust.
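To make the three-stage pipeline concrete, the following is a minimal Python sketch of a LiKL-style procedure. The abstract does not specify how the KL divergence is computed per example or what "enhanced" means for the final classifier, so the scoring rule (divergence of each unlabeled example's normalized feature vector from the mean positive profile), the selection sizes n_pos and n_neg, and the use of scikit-learn's class-weighted LogisticRegression are all illustrative assumptions, not the paper's actual method.

```python
# A minimal, hypothetical sketch of a LiKL-style pipeline (not the paper's
# exact algorithm). Assumes non-negative features (e.g., term frequencies)
# so that rows can be normalized into discrete distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def _normalize(v, eps=1e-10):
    """Turn a non-negative feature vector into a probability distribution."""
    v = np.asarray(v, dtype=float) + eps
    return v / v.sum()

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def likl_sketch(P, U, n_pos=10, n_neg=10):
    """P: labeled positive examples (2-D array, rows = examples).
    U: unlabeled examples. Returns predicted labels (1/0) for U."""
    pos_profile = _normalize(P.mean(axis=0))
    # Score each unlabeled example by its divergence from the positive
    # profile: small divergence -> likely positive, large -> likely negative.
    scores = np.array([kl_divergence(_normalize(u), pos_profile) for u in U])
    order = np.argsort(scores)
    likely_pos = U[order[:n_pos]]
    likely_neg = U[order[-n_neg:]]
    # Train the final classifier on positives (original + likely) versus
    # the likely negatives; class_weight="balanced" is a simple nod to the
    # class imbalance the abstract emphasizes.
    X = np.vstack([P, likely_pos, likely_neg])
    y = np.concatenate([np.ones(len(P) + n_pos), np.zeros(n_neg)])
    clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
    return clf.predict(U)
```

In this sketch the two mining stages share a single divergence ranking; a faithful implementation would follow the paper's own criteria for selecting reliable positives and negatives and its specific enhancement of the logistic regression classifier.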