This paper studies the convergence of an online gradient learning algorithm with a penalty term for three-layer BP neural networks. Before each training epoch begins, the training samples are randomly permuted so that the learning process can more easily escape from local minima. We establish a monotonicity theorem for the error function, together with weak and strong convergence theorems for the algorithm.
In this paper, we present and discuss an online gradient method with a penalty term for three-layer BP neural networks. The training examples are randomly permuted before each training epoch so that the learning process can more easily escape from local minima. The monotonicity of the error function and both the weak and strong convergence of the algorithm are proved.
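To make the setting concrete, the following is a minimal sketch of an online gradient method with an L2 penalty term for a three-layer (one-hidden-layer) network, with the training samples randomly permuted before each epoch as described above. The network sizes, learning rate, and penalty coefficient are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset (XOR); the paper does not specify a particular task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

n_in, n_hid = 2, 4
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=n_hid)          # hidden -> output weights
eta, lam = 0.5, 1e-4                            # learning rate, penalty coefficient

def penalized_error():
    """Squared error over all samples plus the penalty term lam * ||W||^2."""
    H = sigmoid(X @ W1.T)
    out = sigmoid(H @ W2)
    return 0.5 * np.sum((out - Y) ** 2) + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

initial_error = penalized_error()
for epoch in range(2000):
    # Randomly permute the training samples before each epoch, as the
    # abstract describes, to help the iteration escape local minima.
    for i in rng.permutation(len(X)):
        h = sigmoid(W1 @ X[i])
        o = sigmoid(W2 @ h)
        # Backpropagate the error for this single sample (online update),
        # including the gradient of the penalty term.
        delta_o = (o - Y[i]) * o * (1 - o)
        grad_W2 = delta_o * h + 2 * lam * W2
        delta_h = delta_o * W2 * h * (1 - h)
        grad_W1 = np.outer(delta_h, X[i]) + 2 * lam * W1
        W2 -= eta * grad_W2
        W1 -= eta * grad_W1

final_error = penalized_error()
```

The per-sample weight update and the decreasing penalized error are the quantities whose monotonicity and convergence the paper analyzes.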