Text segmentation has important applications in many fields, including information extraction, automatic summarization, language modeling, and anaphora resolution. LDA-based text segmentation models both the corpus and individual texts with LDA, performs inference via Gibbs sampling (an MCMC method) to estimate the model parameters indirectly, and obtains the word probability distributions, thereby linking the latent topics hidden within segments to the observable words on the text surface. In the experiments, Chinese whole sentences serve as the elementary blocks, and a variety of similarity metrics and boundary-estimation strategies are evaluated. The best results show that an appropriate combination of the two can drive the segment-boundary error rate far below that of other comparable segmentation algorithms.
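The boundary-estimation step described above can be illustrated with a minimal sketch. Assuming the Gibbs sampler has already produced a topic distribution for each sentence block (the `theta` matrix below is invented toy data, not from the paper's experiments), adjacent blocks are compared with cosine similarity and a boundary is placed wherever similarity dips well below the average. The threshold rule shown here is one simple choice, not necessarily the strategy the paper found best:

```python
import numpy as np

def adjacent_similarities(theta):
    # Cosine similarity between each pair of consecutive
    # sentence-topic distribution vectors.
    sims = []
    for a, b in zip(theta[:-1], theta[1:]):
        sims.append(float(np.dot(a, b) /
                          (np.linalg.norm(a) * np.linalg.norm(b))))
    return sims

def boundaries_by_threshold(sims, c=0.5):
    # Place a boundary before block i+1 whenever the similarity
    # between blocks i and i+1 falls below mean - c * std.
    mu, sigma = np.mean(sims), np.std(sims)
    thresh = mu - c * sigma
    return [i + 1 for i, s in enumerate(sims) if s < thresh]

# Toy per-sentence topic distributions (2 topics, 6 sentences);
# the topic shifts sharply between sentence 2 and sentence 3.
theta = np.array([
    [0.90, 0.10],
    [0.80, 0.20],
    [0.85, 0.15],
    [0.10, 0.90],  # topic shift: new segment starts here
    [0.20, 0.80],
    [0.15, 0.85],
])

sims = adjacent_similarities(theta)
print(boundaries_by_threshold(sims))  # → [3]
```

A boundary index of 3 means sentence 3 opens a new segment, matching the topic shift built into the toy data; in the paper's setting, other similarity metrics (e.g. on smoothed topic vectors) and other boundary rules could be swapped in at these two functions.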