This paper uses HowNet as a case study to examine the necessity and effectiveness of incorporating a human-annotated knowledge base into neural network language models. Word embeddings are typically obtained by training a neural network model on a large-scale unlabeled corpus, but this framework suffers from two problems: the quality of the resulting vectors for low-frequency words is not satisfactory, and sense vectors for polysemous words are not available. We propose neural network language models that learn embeddings for all the semantic primitives (sememes) defined in HowNet jointly from HowNet and a large-scale corpus; using the sememe vectors as a bridge, the models automatically derive word sense vectors and improve word vectors, in particular for low-frequency words. Preliminary experimental results show that our models improve performance on both word similarity and word sense disambiguation tasks, benefiting the handling of low-frequency and polysemous words. We suggest that research on neural network language models incorporating human-annotated knowledge bases should be a key focus of natural language processing in the coming years.
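The idea of using sememe vectors as a bridge to word sense vectors can be illustrated with a minimal sketch. The abstract does not specify the composition function, so the averaging scheme, the toy sememe inventory, and all names below are assumptions for illustration only, not the paper's actual model:

```python
import numpy as np

EMB_DIM = 4  # toy dimensionality; real embeddings are far larger
rng = np.random.default_rng(0)

# Hypothetical sememe (semantic primitive) embedding table.
# In the proposed models these vectors would be learned jointly
# from HowNet annotations and a large-scale corpus.
sememe_vectors = {
    "human": rng.standard_normal(EMB_DIM),
    "occupation": rng.standard_normal(EMB_DIM),
    "teach": rng.standard_normal(EMB_DIM),
}

def sense_vector(sememes):
    """Compose one sense vector from the sememes annotated for that
    word sense in HowNet; here we simply average them (an assumption)."""
    return np.mean([sememe_vectors[s] for s in sememes], axis=0)

# A word sense annotated with several sememes gets a single vector,
# even if the word itself is rare in the corpus.
v = sense_vector(["human", "occupation", "teach"])
print(v.shape)
```

Because every word sense in HowNet is annotated with sememes drawn from a small closed inventory, even a low-frequency word inherits a usable vector from its well-trained sememe vectors, which is the intuition behind the claimed gains on low-frequency and polysemous words.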