This paper studies the IB (Inverse Boosting) neural network ensemble algorithm and proposes an improved version, the IB+ algorithm. The improved algorithm inherits IB's inverse sample-distribution adjustment strategy; during training, it combines some of the already-trained individual sub-networks into an intermediate ensemble network, which is then used to generate the new training-sample distribution. Experimental results show that, for Boosting-style algorithms with inverse weight distributions, the degree of correlation among individual sub-networks has little effect on the generalization performance of the ensemble, whereas reducing the generalization error of the individual networks improves the generalization performance of the ensemble.
This paper presents a study of the IB (Inverse Boosting) algorithm and proposes an improved version called IB+. Both IB and IB+ increase the weights of samples that have been classified correctly during training. The main difference between IB and IB+ lies in how the training-sample weights are updated at each iteration. In IB, the sample weights are updated according to an inverse error vector determined by the performance of the most recently trained individual network. IB+, by contrast, uses an intermediate ensemble network built from the already-trained individual networks, rather than a single network, to determine the inverse error vector, which yields a more suitable sample distribution. Further experimental results show that, for ensembles built with an inverse error vector to generate new sample distributions, the generalization performance is determined by the performance of the individual base networks rather than by the degree of correlation among them.
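To make the training loop concrete, the following is a minimal Python sketch of an IB+-style procedure. It is illustrative only: the resampling step, the multiplicative weight factor of 1.5, the majority-vote combination, and the `MLPClassifier` settings are assumptions not taken from the paper. Only the overall scheme follows the abstract, namely that correctly classified samples have their weights increased, and that in IB+ the errors come from an intermediate ensemble of the sub-networks trained so far rather than from the last network alone.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ib_plus(X, y, n_nets=5, seed=0):
    """Hypothetical sketch of an IB+-style training loop.

    Assumes y contains non-negative integer class labels. Sample weights
    are updated *inversely*: correctly classified samples get larger
    weights. In the IB+ variant the correctness check uses an intermediate
    ensemble of all sub-networks trained so far.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    weights = np.full(n, 1.0 / n)      # initial uniform sample distribution
    nets = []

    for _ in range(n_nets):
        # Boosting by resampling: draw a training set according to the
        # current distribution (plain MLPs accept no per-sample weights).
        idx = rng.choice(n, size=n, p=weights)
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
        net.fit(X[idx], y[idx])
        nets.append(net)

        # IB+: intermediate ensemble of all sub-networks trained so far
        # predicts the full training set by majority vote.
        votes = np.array([m.predict(X) for m in nets])
        ensemble_pred = np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, votes)

        # Inverse update: *increase* the weight of correctly classified
        # samples (the opposite of AdaBoost's rule), then renormalize.
        correct = (ensemble_pred == y)
        weights = np.where(correct, weights * 1.5, weights)
        weights /= weights.sum()

    return nets
```

Replacing the intermediate-ensemble prediction with the prediction of the last trained network alone recovers the plain IB update described above; the rest of the loop is unchanged.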