To obtain better decision tree ensembles, this paper proposes, on the basis of theoretical analysis and from the perspective of the data, an ensemble method for decision trees based on representative data. The method first extracts a representative training set from the original training set with the Partitioning Around Medoids (PAM) algorithm, then trains a number of decision tree classifiers on this representative training set and builds an ensemble model from them. The method aims to select as little representative data as possible while still training an ensemble of decision trees that is as good as possible. Experimental results show that, while using less representative training data, the method achieves higher ensemble accuracy than Bagging and Boosting.
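The abstract only outlines the pipeline (PAM-based selection of a representative training set, training several decision trees on it, and combining them into an ensemble), so the following is a minimal Python sketch of that pipeline. The simplified k-medoids routine, the subset size `k`, the number of trees, the per-tree random seeds, and the majority-vote combination are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch: PAM-style representative data selection + decision tree ensemble.
# The clustering settings, subset size and voting scheme are assumptions for illustration.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def pam_medoids(X, k, n_iter=50, rng=None):
    """Greedy PAM-style k-medoids: return indices of k medoid (representative) points."""
    rng = np.random.default_rng(rng)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)          # assign points to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # pick the cluster member minimising total distance to its cluster
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Extract a small representative training set with PAM (one run per tree here,
#    with different random seeds so the trees differ -- an assumption, since the
#    abstract does not say how diversity among the trees is obtained).
# 2. Train one decision tree on each representative subset.
n_trees, k = 5, 30   # illustrative values, not from the paper
trees = []
for seed in range(n_trees):
    idx = pam_medoids(X_train, k, rng=seed)
    tree = DecisionTreeClassifier(random_state=seed)
    tree.fit(X_train[idx], y_train[idx])
    trees.append(tree)

# 3. Combine the trees by majority vote.
votes = np.stack([t.predict(X_test) for t in trees])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (pred == y_test).mean())
```

Note that each tree sees only `k` representative points rather than a bootstrap sample of the full training set, which is the main contrast with Bagging-style resampling described in the abstract.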