This paper discusses the main feature dimensionality reduction methods used in text classification and their characteristics, and analyses the principle and method of feature reduction based on the scatter difference criterion. While avoiding the matrix inversion problem, the text feature set is first compressed through feature selection; a second extraction is then performed on the feature set using the weighted scatter difference principle, achieving a substantial reduction in feature dimensionality with minimal information loss. Experimental results show that this method performs well in text classification.
The problem of high feature dimensionality is one of the main problems in text classification. In this paper we propose a method for extracting text features based on the scatter difference criterion and the CHI statistic. First, we analyse the main feature reduction methods used in text classification and their characteristics. Second, we analyse the principle and method of the scatter difference criterion. By avoiding computation of the inverse matrix, we achieve a first round of feature reduction through feature selection. We then use a weighted scatter difference to extract the remaining features, reducing the feature dimensionality with minimal information loss. Finally, our text categorization experiments show that this method achieves better precision.
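The two-stage reduction described above, CHI-based term selection followed by a weighted scatter-difference score, might be sketched as below. This is a minimal illustration under our own assumptions: the function names, the smoothing constant, and the scatter weight are hypothetical choices, not the paper's actual implementation.

```python
import numpy as np

def chi2_term_scores(X, y):
    """CHI statistic per term for a document-term matrix X (docs x terms).

    Scores each term against each class from its 2x2 contingency table
    and keeps the maximum over classes (a common aggregation choice).
    """
    X = (X > 0).astype(float)          # term presence/absence
    n = X.shape[0]
    scores = np.zeros(X.shape[1])
    for c in np.unique(y):
        in_c = (y == c).astype(float)
        A = in_c @ X                    # term present, document in class c
        B = (1 - in_c) @ X              # term present, document not in c
        C = in_c.sum() - A              # term absent, document in class c
        D = (1 - in_c).sum() - B        # term absent, document not in c
        # 1e-12 is an arbitrary smoothing constant to avoid division by zero
        chi2 = n * (A * D - B * C) ** 2 / (
            (A + B) * (C + D) * (A + C) * (B + D) + 1e-12)
        scores = np.maximum(scores, chi2)
    return scores

def scatter_difference_scores(X, y, weight=1.0):
    """Per-feature scatter difference: between-class scatter minus a
    weighted within-class scatter. Unlike the Fisher criterion, no
    matrix inversion is required; `weight` is the scatter weight.
    """
    mean_all = X.mean(axis=0)
    sb = np.zeros(X.shape[1])           # between-class scatter
    sw = np.zeros(X.shape[1])           # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        p = len(Xc) / len(X)            # class prior
        sb += p * (Xc.mean(axis=0) - mean_all) ** 2
        sw += p * Xc.var(axis=0)
    return sb - weight * sw
```

In a pipeline, one would keep the top-scoring terms under `chi2_term_scores` for the first compression, then rank the survivors with `scatter_difference_scores` for the second extraction; both steps only sort per-feature scores, which is what makes the approach cheap compared with inverse-matrix methods.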