To address the difficulty of designing neural network architectures, an e-exponential information entropy of the network's connection weights is defined. This entropy overcomes the inherent drawbacks of Shannon entropy while describing uncertainty equally well. Introduced as a penalty term into the objective function used to train the network, the entropy's particular properties cause small connections to be penalized and large connections to be encouraged, so that small weights rapidly converge toward zero. Deleting the near-zero connections then simplifies the network architecture. Simulation results on a typical nonlinear function approximation task show that the pruning algorithm simplifies the network structure while preserving its approximation performance.
This paper proposes a pruning algorithm for neural networks based on information entropy. In the proposed algorithm, a new e-exponential information entropy of the network's connection weights is defined, building on the theory of Shannon's information entropy. While both entropies describe uncertainty in approximately the same way, the new e-exponential entropy overcomes the inherent drawbacks of Shannon's entropy. Introducing the newly defined entropy as a penalty term into the usual objective function penalizes minor weight connections and encourages major ones, a consequence of the entropy function's unique properties. A simpler network architecture can then be obtained by deleting the connection weights whose values are approximately zero. Simulation results on a typical nonlinear function approximation task show that the network architecture is simplified while its approximation performance is preserved.
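The abstract does not spell out the entropy formula, so the following is only a minimal sketch of the idea, assuming an exponential entropy of Pal–Pal form, H(w) = Σᵢ pᵢ·e^(1−pᵢ) with pᵢ = |wᵢ| / Σⱼ|wⱼ|, and a hypothetical regularization weight `lam`. Under this definition, a uniform weight distribution has maximal entropy and a sparse one has low entropy, so adding H(w) as a penalty to the training loss pushes minor weights toward zero while leaving major weights dominant; near-zero connections can then be deleted:

```python
import numpy as np

def exp_entropy(w, eps=1e-12):
    """e-exponential entropy of normalized weight magnitudes.

    Assumed form (not given in the abstract): sum_i p_i * exp(1 - p_i),
    where p_i = |w_i| / sum_j |w_j|. Uniform weights maximize it;
    a single dominant weight minimizes it.
    """
    mag = np.abs(np.asarray(w, dtype=float))
    p = mag / (mag.sum() + eps)
    return float(np.sum(p * np.exp(1.0 - p)))

def penalized_loss(task_loss, w, lam=0.01):
    """Objective = normal task loss + entropy penalty (lam is hypothetical)."""
    return task_loss + lam * exp_entropy(w)

def prune(w, threshold=1e-2):
    """Delete connections whose weights have converged near zero."""
    w = np.asarray(w, dtype=float).copy()
    w[np.abs(w) < threshold] = 0.0
    return w

# Sanity check: spread-out weights carry more entropy than sparse ones,
# so minimizing the penalized loss favors sparse weight vectors.
uniform = np.array([0.25, 0.25, 0.25, 0.25])
sparse = np.array([1.0, 0.0, 0.0, 0.0])
assert exp_entropy(uniform) > exp_entropy(sparse)
```

In a full training loop one would minimize `penalized_loss` by gradient descent and call `prune` after convergence; the threshold and `lam` would need tuning per task.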