Single-hidden Layer Feedforward Neural Networks (SLFNs) have been widely applied in fields such as pattern recognition, automatic control, and data mining. However, the speed of traditional learning methods falls far short of practical needs, which has become the main bottleneck restricting their development. There are two main reasons for this: (1) the traditional back-propagation (BP) method is based on gradient descent and requires many iterations; (2) all of the network parameters must be determined iteratively during training. As a result, both the computational cost and the search space are very large. To address these problems, motivated by the one-shot learning idea of the Extreme Learning Machine (ELM), this paper proposes a fast learning algorithm, the Regularized Extreme Learning Machine (RELM), based on structural risk minimization and weighted least squares. RELM avoids repeated iteration and local minima, and offers better generalization, robustness, and controllability than the original ELM. Experimental results further show that RELM's overall performance is superior to that of BP and SVM as well.
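To make the one-shot training step concrete, the sketch below is a minimal NumPy illustration, not the paper's exact algorithm: it implements the standard ridge-regularized ELM solve beta = (H'H + I/C)^(-1) H'T with randomly drawn hidden-layer parameters and tanh activations, and omits the weighted least-squares refinement mentioned in the abstract. The function names, the activation choice, and the regularization parameter C are assumed placeholders.

```python
import numpy as np

def relm_train(X, T, n_hidden=100, C=1.0, rng=None):
    """Regularized ELM sketch: random hidden layer, then a single
    closed-form ridge solve for the output weights (no iteration)."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Hidden-layer weights and biases are drawn randomly and never updated;
    # this is the "one-shot" learning idea inherited from ELM.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Structural risk minimization: min ||beta||^2 + C * ||H beta - T||^2,
    # solved in closed form as beta = (H'H + I/C)^(-1) H'T.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    """Apply the frozen random hidden layer, then the learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit a noisy sine curve and report training MSE.
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
T = np.sin(3 * X) + 0.05 * np.random.default_rng(0).normal(size=X.shape)
W, b, beta = relm_train(X, T, n_hidden=50, C=100.0, rng=0)
print(np.mean((relm_predict(X, W, b, beta) - T) ** 2))
```

Because the hidden layer is fixed, training reduces to one linear solve, which is why this family of methods avoids gradient descent's many iterations and local minima; the term I/C trades empirical risk against the norm of the output weights.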