In support vector machines (SVMs), the definition of the kernel function is critical: different kernels produce different classification results. How to exploit the complementary strengths of several kernel functions to jointly improve SVM performance has become an active research topic, and multiple kernel learning (MKL) emerged in response. Recently, a simple and effective sparse MKL algorithm, generalized multiple kernel learning (GMKL), was proposed; it combines the advantages of the L1-norm and the L2-norm to form an elastic constraint on the kernel weights. However, GMKL does not exploit the information shared among the kernels it selects. The MultiK-MHKS algorithm, on the other hand, uses canonical correlation analysis (CCA) to extract the common information among the kernels, but it ignores the problem of kernel selection. This paper improves on and combines these two algorithms, yielding the improved domain multiple kernel support vector machine (IDMK-SVM). We prove that the proposed model retains the properties of GMKL and that the training algorithm converges. Finally, simulation experiments demonstrate that IDMK-SVM achieves higher classification precision than existing typical MKL algorithms.
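To make the two ingredients concrete, the following are minimal sketches in illustrative notation, not the paper's own. For the GMKL-style elastic constraint, assume base kernels $K_1, \dots, K_M$ are combined as $K(\mathbf{d}) = \sum_{k=1}^{M} d_k K_k$ with nonnegative weights $d_k$; the weights can then be learned by

    $\min_{\mathbf{d} \ge 0} \; J(\mathbf{d}) + \lambda_1 \lVert \mathbf{d} \rVert_1 + \frac{\lambda_2}{2} \lVert \mathbf{d} \rVert_2^2$,

where $J(\mathbf{d})$ denotes the optimal SVM objective for the combined kernel $K(\mathbf{d})$, and $\lambda_1, \lambda_2 \ge 0$ trade off the two penalties: the L1 term drives sparse kernel selection, while the L2 term stabilizes the weights of the kernels that survive. The common information that MultiK-MHKS extracts can likewise be sketched by the standard CCA criterion for two kernel feature views with covariance blocks $C_{xx}$, $C_{yy}$, $C_{xy}$:

    $\max_{w_x, w_y} \; \dfrac{w_x^{\top} C_{xy} w_y}{\sqrt{(w_x^{\top} C_{xx} w_x)\,(w_y^{\top} C_{yy} w_y)}}$,

which seeks projection directions $w_x, w_y$ whose projected views are maximally correlated.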