Word segmentation is a crucial step in Chinese natural language processing research. In earlier work, the Maximum Entropy model and the Conditional Random Field (CRF) model have been widely applied to Chinese word segmentation. The Max-Margin Markov Network (M3N) model is a structured learning model recently proposed by B. Taskar et al. This paper applies the M3N model to Chinese word segmentation and reports experiments. On the given training and test corpora, the segmentation precision reaches 95%, indicating that M3N-based Chinese word segmentation achieves fairly high precision and is an effective segmentation method.
Chinese word segmentation is a crucial step in Chinese Natural Language Processing (NLP). In previous research, the Maximum Entropy model and the Conditional Random Field (CRF) model have been widely used for Chinese word segmentation. In this paper, we apply the M3N (Max-Margin Markov Network) model, a structured learning model introduced by B. Taskar et al., to Chinese word segmentation. Experiments on the given training and test corpora show that M3N is an effective Chinese word segmentation method, achieving a fairly high precision of 95%.
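To make the modeling setup concrete, the sketch below illustrates the character-level sequence-labeling formulation commonly shared by CRF- and M3N-based segmenters, in which each character receives a B/M/E/S tag and word boundaries are read off the tag sequence. The tag set, helper names, and example sentence are illustrative assumptions for this sketch; the abstract does not specify the paper's actual tag scheme or features.

```python
# Minimal sketch (not the paper's implementation) of the character-tagging
# formulation of Chinese word segmentation: B = word-initial, M = word-internal,
# E = word-final, S = single-character word. A sequence model such as CRF or M3N
# predicts one tag per character; decoding the tags recovers the segmentation.

def words_to_tags(words):
    """Convert a gold segmentation (list of words) into per-character BMES tags."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")                              # single-character word
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def tags_to_words(chars, tags):
    """Recover words from a character sequence and its predicted BMES tags."""
    words, buf = [], ""
    for ch, tag in zip(chars, tags):
        buf += ch
        if tag in ("E", "S"):                             # word ends after E or S
            words.append(buf)
            buf = ""
    if buf:                                               # tolerate a dangling B/M tag
        words.append(buf)
    return words

if __name__ == "__main__":
    gold = ["汉语", "分词", "是", "重要", "环节"]          # hypothetical segmented sentence
    chars = [c for w in gold for c in w]
    tags = words_to_tags(gold)
    assert tags_to_words(chars, tags) == gold
    print(list(zip(chars, tags)))
```

Under this formulation, the Maximum Entropy, CRF, and M3N models differ only in how the tag sequence is scored and trained, not in the tagging representation itself.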