In existing estimation of distribution algorithms (EDAs), learning the probabilistic model depends more or less on prior knowledge, which is often unavailable. To address this problem, this paper proposes applying the idea of ensemble learning to realize automatic learning of both the structure and the parameters of the probabilistic model in EDAs, and presents a continuous-domain estimation of distribution algorithm based on an incremental learning strategy. The algorithm uses a greedy EM algorithm to learn a Gaussian mixture model (GMM) incrementally, so that the model structure and parameters are learned automatically without any prior knowledge. The algorithm's performance is examined through a set of function optimization experiments and compared with other EDAs of the same type. The experimental results show that the method is effective and that, compared with similar EDAs, it obtains equal or better results with relatively few iterations.
In the estimation of distribution algorithms (EDAs) proposed in the published literature, learning the probabilistic model depends more or less on prior knowledge of the model structure, which is unavailable during evolutionary optimization. This paper proposes a new idea that learns the probabilistic model in EDAs by an approach similar to ensemble learning in machine learning, so as to achieve automatic learning of both the model parameters and the model structure. Based on this idea, a new EDA for continuous optimization, built on progressive learning of a Gaussian mixture model (GMM), is proposed. A greedy EM algorithm is adopted to estimate the GMM in a progressive manner, so that the model structure and parameters are learned automatically without any requirement of prior knowledge. A set of experiments on selected function optimization problems is performed to evaluate the efficiency and performance of the new algorithm and to compare it with other EDAs. The experimental results confirm the feasibility and effectiveness of the idea, and also show that, within a relatively small number of generations, the new algorithm performs as well as or better than the compared EDAs.