Multi-class classification is an important problem in machine learning. The currently prevalent multi-class method, "one-versus-all" (OvA), directly reuses "standard" binary classifiers and must construct them repeatedly, which leads to high computational complexity and low classification efficiency. The multi-class classifier based on the support vector machine does not need to construct binary classifiers repeatedly, but because it corresponds to the center of the largest hypersphere inscribed in the version space, its generalization ability degrades markedly when the version space is asymmetric or elongated. The multi-class classification algorithm M-ACM, based on the analytical center of the version space, overcomes these problems. This paper analyzes the generalization performance of the classifier theoretically, gives an upper bound on its generalization error, and verifies it experimentally.
The analytical center machine, which is based on the analytical center of the version space, outperforms the support vector machine, especially when the version space is elongated or asymmetric. While the analytical center machine for binary classification is well understood, little is known about the corresponding multi-class classification. Multi-class classification is a significant challenge, both theoretically and practically, in the field of machine learning. The prevailing multi-class classification method, one-versus-all, requires repeatedly constructing binary classifiers to separate each class from all the others, which leads to heavy computation and low classification efficiency. Although the multi-class support vector machine corresponds to a simple quadratic optimization, it is not very effective when the version space is asymmetric or elongated. Therefore, a multi-class classification approach based on the analytical center of the version space, which corresponds to a simple quadratically constrained linear optimization, is proposed to address these problems. To validate its generalization performance theoretically, an upper bound on its generalization error is formulated and proved. Experiments on the wine recognition and glass identification datasets show that the proposed approach outperforms the multi-class support vector machine in generalization error.
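To make the one-versus-all scheme discussed above concrete, the following is a minimal sketch: one binary classifier is trained per class to separate it from all the others, and prediction takes the class with the largest decision value. The use of scikit-learn's LinearSVC as the base binary learner and of its bundled wine data is an illustrative assumption, not the setup used in the paper.

```python
# Minimal one-versus-all sketch: K binary classifiers, one per class.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

classes = np.unique(y_tr)
models = []
for c in classes:
    # Relabel: +1 for the current class, -1 for every other class.
    clf = LinearSVC(max_iter=10000).fit(X_tr, np.where(y_tr == c, 1, -1))
    models.append(clf)

# Predict with the classifier that returns the largest decision value.
scores = np.column_stack([m.decision_function(X_te) for m in models])
y_pred = classes[np.argmax(scores, axis=1)]
print("OvA accuracy:", np.mean(y_pred == y_te))
```

The sketch shows the cost the abstract refers to: with K classes, K separate binary classifiers must be trained and evaluated.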
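For the analytical center itself, the following is a minimal sketch of the binary case under one common formulation of the analytic center of the version space: maximize the sum of log-margins, sum_i log(y_i <w, x_i>), subject to ||w||^2 <= 1. The toy data, the SLSQP solver, and the initialization are illustrative assumptions; the paper's M-ACM algorithm and its multi-class extension are not reproduced here.

```python
# Sketch: analytic center of a binary version space via log-margin maximization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Linearly separable toy data: labels are the sign of <w_true, x>.
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(60, 3))
y = np.sign(X @ w_true)

def neg_log_margins(w):
    m = y * (X @ w)
    # Clip so the log stays defined if the solver steps slightly outside
    # the version space during the search.
    return -np.sum(np.log(np.clip(m, 1e-12, None)))

cons = [
    {"type": "ineq", "fun": lambda w: 1.0 - w @ w},         # ||w||^2 <= 1
    {"type": "ineq", "fun": lambda w: y * (X @ w) - 1e-9},  # stay inside the version space
]

# Start from a direction known to separate the data (here the generating
# direction); any consistent hypothesis, e.g. a perceptron solution, would
# also give a feasible starting point.
w0 = w_true / np.linalg.norm(w_true)

res = minimize(neg_log_margins, w0, method="SLSQP", constraints=cons)
w_ac = res.x
print("all margins positive:", bool(np.all(y * (X @ w_ac) > 0)))
print("smallest margin:", float(np.min(y * (X @ w_ac))))
```

Because every training constraint enters the objective through a log term, the solution is pulled toward the "middle" of the version space rather than toward the center of its largest inscribed hypersphere, which is the intuition behind the claimed advantage when the version space is asymmetric or elongated.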