The recurrent neural network language model (RNNLM) alleviates the data sparseness and curse-of-dimensionality problems of traditional N-gram models, but it still struggles to capture long-distance dependencies because of the vanishing gradient problem. This paper proposes an improved RNNLM based on contextual word-vector features. The model structure is extended by adding a feature layer to the input layer; during training, contextual word vectors are fed in through this feature layer, strengthening the network's ability to learn long-distance constraints. Experimental results show that the proposed method effectively improves the performance of the language model.
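To make the architectural change concrete, below is a minimal sketch of an RNN language model whose input layer is augmented with a feature layer carrying contextual word vectors. All class and parameter names, the layer dimensions, and the choice of how the context features are produced are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FeatureAugmentedRNNLM(nn.Module):
    """Sketch of an RNNLM with an extra feature layer in the input.

    Assumption: the contextual word vectors are supplied externally
    (e.g., averaged embeddings of a wider context window) and are
    concatenated with the current word embedding before the recurrence.
    """

    def __init__(self, vocab_size, embed_dim=100, feature_dim=100, hidden_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # The recurrent layer consumes the current word embedding plus the
        # context feature vector, which carries long-distance information.
        self.rnn = nn.RNN(embed_dim + feature_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, context_features, hidden=None):
        # word_ids:         (batch, seq_len) token indices
        # context_features: (batch, seq_len, feature_dim) contextual word vectors
        embedded = self.embedding(word_ids)
        rnn_input = torch.cat([embedded, context_features], dim=-1)
        output, hidden = self.rnn(rnn_input, hidden)
        return self.out(output), hidden  # per-step vocabulary logits

# Illustrative usage with random inputs:
model = FeatureAugmentedRNNLM(vocab_size=10000)
ids = torch.randint(0, 10000, (2, 5))          # batch of 2, length 5
feats = torch.randn(2, 5, 100)                 # stand-in context vectors
logits, h = model(ids, feats)                  # logits: (2, 5, 10000)
```

The key design point is that the feature layer widens only the input layer: the recurrent state and output softmax are unchanged, so the extra long-distance signal is injected at each time step without altering the rest of the RNNLM.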