When the L1-norm multiple kernel learning (MKL) method produces a sparse solution for the kernel weights, it may discard useful information and degrade generalization performance; when the Lp-norm (p > 1) MKL method produces a non-sparse solution, it retains much redundant information and is sensitive to noise. To address these issues, a generalized sparse MKL method is proposed. The algorithm is an MKL method with an elastic-net-type regularizer mixing the L1-norm and the Lp-norm (p > 1); it can not only adjust sparsity flexibly but also encourage a grouping effect among the kernel weights, and both the L1-norm and Lp-norm MKL methods can be regarded as its special cases. Since the mixed constraint introduced by the method is nonlinear, it is approximated by a second-order Taylor expansion, and the resulting optimization problem is solved by semi-infinite programming. Experimental results show that the improved method achieves good classification performance while dynamically adjusting sparsity and also supports the grouping effect, verifying that the improved method is effective and feasible.
The L1-norm multiple kernel learning (MKL) method may discard useful information and yield degraded generalization performance when it produces a sparse solution for the kernel weights, while the Lp-norm (p > 1) MKL method may retain much redundant information and be sensitive to noise when it produces a non-sparse solution for the kernel weights. To address these issues, this paper proposed a method called generalized sparse MKL (GSMKL), which introduces an elastic-net-type constraint on the kernel weights. More specifically, it is an MKL method whose constraint combines the L1-norm and the Lp-norm (p > 1) of the kernel weights, which can not only adjust the sparseness flexibly but also encourage a grouping effect in the solution; both L1-norm MKL and Lp-norm MKL can be regarded as special cases of this method. Since the mixed constraint is nonlinear, the method utilized a second-order Taylor expansion to approximate it and employed semi-infinite programming (SIP) to solve the resulting optimization problem. Experimental results show that the improved algorithm can not only achieve good classification performance while dynamically adjusting sparsity, but also facilitate the grouping effect, so the improved algorithm is effective and feasible.
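To make the mixed constraint and its Taylor approximation concrete, the following is a minimal NumPy sketch, assuming nonnegative kernel weights d and a trade-off parameter `lam` between the two norms. The function names (`elastic_net_constraint`, `taylor_lp`, `combine_kernels`) and the exact parameterization are hypothetical illustrations, not the paper's actual formulation.

```python
import numpy as np

def elastic_net_constraint(d, lam=0.5, p=2.0):
    """Value of the hypothetical mixed constraint
    lam * ||d||_1 + (1 - lam) * ||d||_p^p on the kernel weights d.
    lam = 1 recovers the L1-norm MKL constraint, lam = 0 the Lp-norm one."""
    d = np.asarray(d, dtype=float)
    return lam * np.sum(np.abs(d)) + (1.0 - lam) * np.sum(np.abs(d) ** p)

def taylor_lp(d, d0, p=2.0):
    """Second-order Taylor approximation of the nonlinear Lp term
    sum_m d_m^p around the expansion point d0 (assumed d0 > 0).
    For p = 2 the approximation is exact."""
    d = np.asarray(d, dtype=float)
    d0 = np.asarray(d0, dtype=float)
    val = d0 ** p                              # zeroth-order term
    grad = p * d0 ** (p - 1)                   # first derivative per coordinate
    hess = p * (p - 1) * d0 ** (p - 2)         # second derivative per coordinate
    diff = d - d0
    return np.sum(val + grad * diff + 0.5 * hess * diff ** 2)

def combine_kernels(kernels, d):
    """Combined kernel K = sum_m d_m * K_m used throughout MKL."""
    return sum(w * K for w, K in zip(d, kernels))
```

For example, with d = [0.5, 0.5], lam = 0.5 and p = 2, the constraint value is 0.5 * 1 + 0.5 * 0.5 = 0.75; replacing the quadratic Lp term with `taylor_lp` turns the nonlinear constraint into one a linear/quadratic solver inside an SIP loop can handle.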