This paper proposes an information-entropy-based method for domain-specific term extraction. Given a corpus classified by domain, the method considers both the uneven distribution of a domain term across domain categories and its even distribution within a particular domain category, and normalizes for the imbalance of the corpora. Manual evaluation shows that the method extracts domain-specific terms more accurately and effectively. The algorithm is also applied to text classification in place of traditional feature selection algorithms; experiments show that it significantly improves classification accuracy.
A statistical method based on information entropy is proposed for domain-specific term extraction from domain-comparative corpora. The method takes into account both the distribution of a candidate term across domains and its distribution within a particular domain, and a normalization step is added to the extraction process to cope with unbalanced corpora. Manual evaluation shows that the proposed method characterizes the attributes of domain-specific terms more precisely and effectively than previous term extraction approaches. The extracted domain-specific terms are further applied in text classification as the feature space, and experimental results indicate that this achieves better performance than traditional feature selection methods.
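The abstract describes two entropy criteria: a good domain term should be distributed *unevenly* across domains (concentrated in one domain) but *evenly* within that domain, with counts normalized by domain size to handle unbalanced corpora. A minimal sketch of such a scoring function is given below; the exact formula and the `domain_term_score` helper are illustrative assumptions, not the paper's published definition.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution (0*log0 := 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def domain_term_score(term_counts, domain_sizes):
    """Score a candidate term (hypothetical formulation, not the paper's exact one).

    term_counts: {domain: [count in doc1, count in doc2, ...]}
    domain_sizes: {domain: total tokens in that domain}

    The score is high when the term concentrates in one domain
    (low cross-domain entropy) yet spreads evenly over that
    domain's documents (high within-domain entropy).
    """
    # Normalize raw counts by domain size to cope with unbalanced corpora.
    freq = {d: sum(c) / domain_sizes[d] for d, c in term_counts.items()}
    total = sum(freq.values())
    if total == 0:
        return 0.0

    # Cross-domain unevenness: 1 minus normalized entropy over domains,
    # so a term occurring in a single domain scores 1.
    cross_probs = [f / total for f in freq.values()]
    n_domains = len(freq)
    cross_uneven = (1.0 - entropy(cross_probs) / math.log2(n_domains)
                    if n_domains > 1 else 1.0)

    # Within-domain evenness: normalized entropy over documents of the
    # dominant domain, so a term spread evenly over its documents scores 1.
    top = max(freq, key=freq.get)
    counts = term_counts[top]
    s = sum(counts)
    within_probs = [c / s for c in counts]
    n_docs = len(counts)
    within_even = (entropy(within_probs) / math.log2(n_docs)
                   if n_docs > 1 else 1.0)

    return cross_uneven * within_even
```

For example, a term appearing only in the "sports" domain and evenly across its documents scores 1.0, while a term spread equally over all domains, or piled into a single document, scores 0.0.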