A Large-Scale SVM Classification Algorithm Based on Data Partitioning and Ensemble Learning
  • Journal: 系统工程 (Systems Engineering)
  • Pages: 84-88
  • Language: Chinese
  • Classification: TP391 [Automation and Computer Technology — Computer Application Technology; Automation and Computer Technology — Computer Science and Technology]
  • Author affiliation: [1] School of Business Administration, South China University of Technology, Guangzhou 510640, Guangdong, China
  • Funding: National Science Fund for Distinguished Young Scholars (70825005); National Natural Science Foundation of China (70801027); Humanities and Social Sciences Foundation of the Ministry of Education (OTJC630059)
  • Related project: Risk-Reward Models and Optimization for Multi-Stage, Multi-Supplier Online Leasing
Chinese Abstract:

Training an SVM for classification amounts to solving a linearly constrained quadratic programming problem in which the number of variables equals the number of training samples, and the kernel matrix that must be computed and stored grows with the square of the training-set size. As the number of samples increases, classical quadratic programming solvers become impractical. For large-scale binary classification problems, this paper proposes a fast SVM training algorithm based on data partitioning and ensemble learning. The main idea is as follows: first, the dataset is preprocessed so that the positive and negative classes are each automatically clustered into several sub-clusters; then every pairing of a positive sub-cluster with a negative sub-cluster is trained with the SMO algorithm, yielding multiple base classifiers; finally, these base classifiers are combined by ensemble learning. Experiments on five UCI datasets show that, compared with the SMO algorithm alone, this partition-based training strategy significantly improves training speed with almost no loss of accuracy.

English Abstract:

The training problem of SVM for classification is equivalent to solving a linearly constrained quadratic program whose number of variables equals the number of training samples, and the kernel matrix to be computed and stored grows with the square of the training-set size. As the number of training samples increases, conventional algorithms for solving quadratic programming problems become unusable, so the design and analysis of training algorithms for large-scale classification problems is currently an active topic in the SVM field. Based on data partitioning and ensemble learning, this paper presents a fast training algorithm for large-scale classification problems. The main idea is as follows. First, the original dataset is preprocessed with k-means clustering so that the data points of the positive and negative classes are each divided into several clusters. Then each pairing of a positive cluster and a negative cluster forms a small binary classification problem, which is solved by SMO to obtain a base classifier. Finally, these base classifiers are combined by ensemble learning. Experiments on five UCI datasets show that the proposed algorithm is faster than SMO without loss of accuracy.
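The three-step pipeline described in the abstracts can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn's KMeans for the pre-clustering step, scikit-learn's SVC (whose libsvm backend uses an SMO-type solver) as the base learner, and simple majority voting as the ensemble rule; the paper's exact clustering and ensemble schemes may differ, and the function names are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def train_partitioned_svm(X, y, k_pos=2, k_neg=2, seed=0):
    """Cluster each class into sub-clusters, then train one SVM per
    (positive sub-cluster, negative sub-cluster) pair.

    Returns the list of k_pos * k_neg base classifiers."""
    Xp, Xn = X[y == 1], X[y == -1]
    # Step 1: k-means pre-clustering of each class separately.
    pos_labels = KMeans(n_clusters=k_pos, n_init=10, random_state=seed).fit_predict(Xp)
    neg_labels = KMeans(n_clusters=k_neg, n_init=10, random_state=seed).fit_predict(Xn)
    classifiers = []
    # Step 2: cross-train an SVM on every positive/negative cluster pair.
    for i in range(k_pos):
        for j in range(k_neg):
            Xi, Xj = Xp[pos_labels == i], Xn[neg_labels == j]
            X_pair = np.vstack([Xi, Xj])
            y_pair = np.hstack([np.ones(len(Xi)), -np.ones(len(Xj))])
            clf = SVC(kernel="rbf", gamma="scale")  # SMO-based solver
            classifiers.append(clf.fit(X_pair, y_pair))
    return classifiers

def predict_ensemble(classifiers, X):
    """Step 3: combine the base classifiers by majority vote (ties -> +1)."""
    votes = np.sum([clf.predict(X) for clf in classifiers], axis=0)
    return np.where(votes >= 0, 1, -1)
```

Each base problem contains only a fraction of the training set, so the quadratic cost of SMO is paid on much smaller kernel matrices, which is where the speedup over training a single SVM on the full dataset comes from.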
