A Dual-Channel LDA Model for Chinese Word Sense Representation and Induction
  • ISSN: 0254-4164
  • Journal: Chinese Journal of Computers (《计算机学报》)
  • Classification: TP391 [Automation and Computer Technology — Computer Application Technology; Automation and Computer Technology — Computer Science and Technology]
  • Author affiliations: [1] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190; [2] CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190
  • Funding: National Natural Science Foundation of China (61333018); Strategic Priority Research Program of the Chinese Academy of Sciences, Category B (XDB02070007)
Chinese Abstract:

Semantic memory is the foundation of human natural language understanding. Language understanding can be viewed as a process of encoding word meanings, retrieving semantic memory, and then decoding word meanings. Properly representing word senses is therefore a key step toward machine language understanding. This paper surveys existing word sense representation methods and their relation to semantic representation in the human brain. Targeting the ambiguity of Chinese words, it focuses on how to automatically extract as much sense-related information as possible from the context of an ambiguous word, and how to integrate that information so that the word's sense is represented by a set of features. Specifically, the paper takes the content words with clear meanings that appear in the context of an ambiguous word as model input, extracts additional context features that can indicate the word's sense, and finally integrates these two kinds of information through a Bayesian probability model to jointly perform word sense representation and induction. Experiments show that the proposed method achieves better sense representation and induction results.

English Abstract:

Semantic memory is the foundation of human language understanding. The human brain needs to encode, retrieve, and decode word meanings for language understanding. Semantic representation is a key step in developing natural language processing systems. Some studies have shown that the formation of concepts is affected by the interaction between the human brain and the real world, and that concepts in the human brain contain rich forms of information including vision, perception, and language. Based on the distributional hypothesis, which states that "similar words occur in similar contexts", concepts are represented as vectors by calculating the co-occurrence frequency of each word and its statistical features. In this way, word representations in computers can be seen as analogous to semantic representations in the human brain. This article mainly focuses on how to represent word senses and perform word sense induction on natural language text. We first investigate the relation between computational models of word representation and semantic representation in the human brain. Through word similarity experiments, we verify that word representations produced by statistical methods can capture the similarity relations between words in the human brain. From the perspective of Chinese word sense disambiguation, this paper studies methods to automatically find semantic features of ambiguous words from their contexts. A Bayesian probability model can learn word representations and perform word sense induction jointly. Specifically, to perform word sense induction, the Bayesian probability model clusters words under the same topic, and the words within a topic can be seen as the representation of that topic. In the word sense induction task, topics are mapped to word senses during evaluation. Therefore, we use a latent Dirichlet allocation model to learn word sense representations from large-scale unannotated corpora, and on the basis of these representations we perform word sense induction on the testing data. In order to better capture the meaning of ambiguous words, this article builds a dual-channel LDA model.
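The abstract describes learning sense representations with latent Dirichlet allocation and mapping inferred topics to word senses. The sketch below is a minimal toy illustration of that general idea, not the paper's dual-channel model: a collapsed Gibbs sampler clusters the context words of a hypothetical ambiguous word into topics, and each context's dominant topic is read as its induced sense. All data, variable names, and hyperparameters here are illustrative assumptions.

```python
import random
from collections import defaultdict

random.seed(0)

# Illustrative (made-up) contexts of the ambiguous word "bank";
# the paper itself works on Chinese corpora.
contexts = [
    "deposit money account loan interest".split(),
    "loan credit money deposit cash".split(),
    "river water fish shore mud".split(),
    "shore river water trees mud".split(),
]

K, alpha, beta = 2, 0.1, 0.01                # topics (candidate senses), priors
V = len({w for doc in contexts for w in doc})

ndk = [[0] * K for _ in contexts]            # doc-topic counts
nkw = [defaultdict(int) for _ in range(K)]   # topic-word counts
nk = [0] * K                                 # tokens per topic
z = []                                       # topic assignment of each token

# Random initialization of topic assignments
for d, doc in enumerate(contexts):
    zd = []
    for w in doc:
        t = random.randrange(K)
        zd.append(t)
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    z.append(zd)

# Collapsed Gibbs sampling sweeps
for _ in range(200):
    for d, doc in enumerate(contexts):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                       for k in range(K)]
            t = random.choices(range(K), weights=weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Induced sense of each context = its dominant topic
senses = [max(range(K), key=lambda k: ndk[d][k]) for d in range(len(contexts))]
print(senses)
```

With well-separated contexts, the financial and river contexts typically land in different topics; the paper's dual-channel variant additionally integrates a second feature channel extracted from the context through the same Bayesian framework.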

Journal Information
  • Chinese Journal of Computers (《计算机学报》)
  • PKU Core Journal (2011 edition)
  • Supervised by: Chinese Academy of Sciences
  • Sponsored by: China Computer Federation; Institute of Computing Technology, Chinese Academy of Sciences
  • Editor-in-Chief: Sun Ninghui
  • Address: No. 6 Kexueyuan South Road, Zhongguancun, Beijing
  • Postcode: 100190
  • Email: cjc@ict.ac.cn
  • Phone: 010-62620695
  • ISSN: 0254-4164
  • CN: 11-1826/TP
  • Postal distribution code: 2-833
  • Awards: China Journal Phalanx "Double-Effect" journal
  • Indexed in: Mathematical Reviews (online edition, USA); Scopus (Netherlands); Ei Compendex (USA); Cambridge Scientific Abstracts (USA); JST database (Japan); China Science and Technology Core Journals; PKU Core Journal (2000, 2004, 2008, 2011, and 2014 editions)
  • Citation count: 48,433