On imbalanced data, traditional feature selection methods such as Information Gain (IG) and Correlation Coefficient (CC) either ignore the contribution of negative features to classification or cannot explicitly balance the proportions of positive and negative features, which degrades the quality of the selected features. This paper proposes a new feature selection method, Positive-Negative feature selection (PN), for selecting neighbor words, and applies it to automatic term extraction from text. Compared with CC, the proposed PN method takes negative features into account; compared with IG, it explicitly balances positive and negative features by measuring, for a feature t, the fraction of training documents containing t that are positive (negative) among all training documents containing t. The importance of t as a positive (negative) feature is then assessed by the ratio of the number of distinct domain (non-domain) concepts that follow t to the total number of domain (non-domain) concepts, which resolves the positive-feature bias of IG. Experimental results show that the proposed PN method outperforms both IG and CC.
On imbalanced data, the performance of traditional feature selection algorithms such as IG and CC degrades because they either ignore negative features or fail to balance positive and negative features explicitly. In this paper, a novel feature selection algorithm, PN (Positive-Negative feature selection), is proposed for term extraction. Compared with CC, PN considers negative features, which are quite valuable on imbalanced data. Compared with IG, PN treats positive and negative features independently and explicitly, and adjusts their values by computing the proportion of distinct terms that follow feature t, which solves the problem of positive features receiving much larger scores. The experimental results show that PN outperforms both CC and IG.
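The abstract describes two ingredients of the PN score for a candidate neighbor word t: the explicitly balanced proportions of positive and negative training documents containing t, and the share of distinct domain (non-domain) concepts that follow t. The exact formulas are not given in the abstract, so the sketch below is only one plausible reading; the function name, argument names, and the way the two sides are combined are all assumptions.

```python
def pn_score(pos_docs_with_t, neg_docs_with_t,
             domain_concepts_after_t, total_domain_concepts,
             nondomain_concepts_after_t, total_nondomain_concepts):
    """Hypothetical PN-style score for a neighbor-word feature t.

    Balances the positive/negative document proportions of t explicitly,
    then weights each side by the diversity of domain (non-domain)
    concepts observed after t. Not the paper's exact formula.
    """
    total_docs = pos_docs_with_t + neg_docs_with_t
    if total_docs == 0:
        return 0.0
    # Explicitly balanced proportions: share of documents containing t
    # that are positive (negative), out of all documents containing t.
    p_pos = pos_docs_with_t / total_docs
    p_neg = neg_docs_with_t / total_docs
    # Importance of t as a positive (negative) feature: fraction of all
    # distinct domain (non-domain) concepts that t is seen to precede.
    w_pos = (domain_concepts_after_t / total_domain_concepts
             if total_domain_concepts else 0.0)
    w_neg = (nondomain_concepts_after_t / total_nondomain_concepts
             if total_nondomain_concepts else 0.0)
    # Combine the two sides symmetrically, so positive features are not
    # systematically favored (the IG bias that PN is designed to avoid).
    return p_pos * w_pos + p_neg * w_neg

# t appears in 8 positive and 2 negative documents, precedes 5 of 50
# domain concepts and 1 of 40 non-domain concepts:
score = pn_score(8, 2, 5, 50, 1, 40)  # 0.8*0.1 + 0.2*0.025 = 0.085
```

Under this reading, a word that precedes many distinct domain concepts and occurs mostly in positive documents scores high, while a word tied to non-domain concepts contributes through the negative branch instead of being discarded, as it would be under CC.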