To accelerate supervised learning by the SpikeProp algorithm under the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule), which are commonly used to speed up training of artificial neural networks, are employed to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, when the adaptive learning rate is used in combination with the momentum term, the two modifications balance each other in a beneficial way, yielding rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs best: with momentum, it achieves the fastest convergence rate, the most stable training process, and the highest learning accuracy. The proposed algorithms are simple and efficient, and therefore valuable for practical applications of SNNs.
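To make the abstract's key mechanism concrete, the following is a minimal, illustrative sketch of the delta-bar-delta learning rate adaptation combined with a momentum term, applied to a toy quadratic objective rather than to SpikeProp itself. The function name `delta_bar_delta_step`, the hyperparameter values, and the learning rate clip `eta_max` are illustrative assumptions, not taken from the paper; the update structure follows the standard delta-bar-delta rule (additive increase on consistent gradient sign, multiplicative decrease on a sign flip, with an exponentially averaged gradient).

```python
import numpy as np

def delta_bar_delta_step(w, grad, state,
                         kappa=0.01, phi=0.2, theta=0.7,
                         alpha=0.5, eta_max=1.0):
    """One delta-bar-delta update with momentum (illustrative sketch).

    `state` holds per-weight learning rates `eta`, the running gradient
    average `delta_bar`, and the previous weight update `dw` (momentum).
    Hyperparameter values here are arbitrary choices for the toy demo.
    """
    eta, delta_bar, dw_prev = state["eta"], state["delta_bar"], state["dw"]

    sign = delta_bar * grad
    # Consistent gradient sign: grow eta additively; sign flip: shrink
    # eta multiplicatively; zero product: leave eta unchanged.
    eta = np.where(sign > 0, np.minimum(eta + kappa, eta_max),  # clip keeps the demo stable
          np.where(sign < 0, eta * (1.0 - phi),
                   eta))

    dw = -eta * grad + alpha * dw_prev  # gradient step plus momentum term
    state["eta"] = eta
    state["delta_bar"] = (1.0 - theta) * grad + theta * delta_bar
    state["dw"] = dw
    return w + dw

# Toy demonstration: minimize E(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([2.0, -3.0])
state = {"eta": np.full(2, 0.1),
         "delta_bar": np.zeros(2),
         "dw": np.zeros(2)}
for _ in range(500):
    w = delta_bar_delta_step(w, w, state)  # gradient of 0.5*||w||^2 is w
```

The interplay the abstract describes is visible in the update: the adaptive per-weight learning rate controls step size while the momentum term smooths the trajectory, so neither modification alone has to be tuned aggressively.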