To find the k nearest neighbors of a document to be classified, the traditional K-nearest-neighbor (KNN) algorithm must compare it with every sample vector in the sample space; when the training set is large, the number of similarity computations grows excessively and classification slows down. To address this, the traditional KNN algorithm is improved: training documents with high mutual similarity are merged into clusters, and the central vector of each cluster is computed. A document to be classified is first compared with the central vector of each cluster, and only when that similarity reaches a given threshold is it compared with every document inside the cluster. This reduces the number of similarity computations to a certain extent and lowers the time complexity of the algorithm. In addition, the traditional TF-IDF formula is improved so that the same feature receives a different weight depending on where it appears in the text.
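As a concrete illustration of the centroid pre-filtering idea, the Python sketch below greedily groups similar training documents into clusters, stores each cluster's central vector, and at classification time scans only the members of clusters whose centroid similarity reaches a threshold. The cosine measure, the greedy clustering rule, the threshold values, and the field weights in position_weighted_tf are illustrative assumptions, not the paper's exact formulas.

```python
# Minimal sketch of cluster-pruned KNN text classification.
# Assumptions: documents are already TF-IDF vectors; cosine similarity;
# greedy single-pass clustering; thresholds chosen for illustration only.
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

def build_clusters(vectors, labels, merge_threshold=0.8):
    """Merge each training vector into the most similar existing cluster if the
    similarity to its centroid reaches merge_threshold; otherwise start a new cluster."""
    clusters = []  # each cluster: {"centroid": vector, "members": [(vector, label), ...]}
    for vec, lab in zip(vectors, labels):
        best, best_sim = None, 0.0
        for c in clusters:
            s = cosine(vec, c["centroid"])
            if s > best_sim:
                best, best_sim = c, s
        if best is not None and best_sim >= merge_threshold:
            best["members"].append((vec, lab))
            member_vecs = np.array([m[0] for m in best["members"]])
            best["centroid"] = member_vecs.mean(axis=0)   # recompute central vector
        else:
            clusters.append({"centroid": vec.copy(), "members": [(vec, lab)]})
    return clusters

def classify(query, clusters, k=5, centroid_threshold=0.3):
    """Compare the query with cluster centroids first; only clusters whose centroid
    similarity reaches centroid_threshold are searched member by member."""
    candidates = []  # (similarity, label)
    for c in clusters:
        if cosine(query, c["centroid"]) >= centroid_threshold:
            for vec, lab in c["members"]:
                candidates.append((cosine(query, vec), lab))
    if not candidates:  # fall back to the members of the single nearest cluster
        nearest = max(clusters, key=lambda c: cosine(query, c["centroid"]))
        candidates = [(cosine(query, v), l) for v, l in nearest["members"]]
    candidates.sort(reverse=True)
    votes = {}
    for _, lab in candidates[:k]:                 # majority vote among top-k neighbors
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)

def position_weighted_tf(term_counts_by_field, field_weights=None):
    """Illustrative position-weighted term frequency: the same term counts more when it
    appears in the title than in the body. The weights here are assumed, not the paper's."""
    if field_weights is None:
        field_weights = {"title": 3.0, "body": 1.0}
    return sum(field_weights.get(f, 1.0) * n for f, n in term_counts_by_field.items())
```

With this pruning, a query is compared against the number of clusters plus the sizes of the few clusters that pass the centroid threshold, rather than against every training document, which is the source of the reduced computation claimed in the abstract.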