1 Introduction

BP neural networks with linear output have been successfully applied to problems such as pattern classification and function approximation [1-2]; for convergence analyses of the BP learning algorithm, see [3-5]. However, the traditional BP algorithm suffers from two drawbacks: slow convergence and a tendency to become trapped in local minima. A common improvement strategy is to add a momentum term to its weight update rule, i.e., a multiple of the previous weight increment, so that the current weight change is influenced not only by the gradient descent direction but also by the previous weight change.
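As a minimal sketch of the momentum update rule described above, the following Python/NumPy example trains a one-hidden-layer network with linear output on a toy function-approximation task. All names, the learning rate, the momentum coefficient, and the target function are illustrative assumptions, not taken from the paper; the key line is the update Δw(t) = -η·∇E(w(t)) + α·Δw(t-1).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy function-approximation task (assumed for illustration): learn y = sin(x)
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
Y = np.sin(X)

# One sigmoid hidden layer, linear output layer
n_hidden = 10
W1 = rng.normal(0.0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

eta, alpha = 0.05, 0.8          # learning rate and momentum coefficient (assumed values)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]  # previous weight increments

def forward(X):
    H = sigmoid(X @ W1 + b1)    # hidden activations
    return H, H @ W2 + b2       # linear output

_, out0 = forward(X)
err0 = np.mean((out0 - Y) ** 2)  # initial mean squared error

for _ in range(2000):
    H, out = forward(X)
    e = out - Y                              # output error signal
    # Gradients of the mean squared error with respect to each parameter
    gW2 = H.T @ e / len(X)
    gb2 = e.mean(axis=0)
    dH = (e @ W2.T) * H * (1.0 - H)          # backpropagate through the sigmoid layer
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)

    params = [W1, b1, W2, b2]
    grads = [gW1, gb1, gW2, gb2]
    for i, (p, g) in enumerate(zip(params, grads)):
        # Momentum update: gradient step plus a multiple of the previous increment
        vel[i] = -eta * g + alpha * vel[i]
        p += vel[i]

_, out1 = forward(X)
err1 = np.mean((out1 - Y) ** 2)  # error after training
```

Here the momentum coefficient `alpha` is fixed; the adaptive choice of the momentum coefficients analyzed in this paper would replace the constant `alpha` with a value recomputed at each iteration.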
The convergence of the BP algorithm with momentum for neural networks with linear output is considered. The momentum coefficients are chosen in an adaptive manner to accelerate and stabilize the learning procedure of the network, and the corresponding weak convergence result is proved.