In this paper, the L1 norm is employed to loosen the upper bound on the VC dimension of the function set, from which a new structural risk based on the L1 norm is constructed. Applying this structural risk in the hidden space, we propose the sparse hidden space support vector machine (SHSSVM). Because the L1 norm inherently induces sparsity, the SHSSVM achieves good sparsity. Like the hidden space support vector machine (HSSVM), the SHSSVM does not require its kernel functions to satisfy the Mercer condition, so they can be chosen from a much wider range. Classification and regression experiments on artificial and benchmark data sets show that the SHSSVM generalizes as well as the support vector machine (SVM) and better than the HSSVM, while yielding a decision function sparser than those of the SVM and HSSVM, which greatly speeds up function evaluation.
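The following is a minimal, illustrative sketch of the core idea, not the paper's exact SHSSVM formulation: training points are mapped into a hidden space by an arbitrary kernel (here a sigmoid/tanh kernel, which need not satisfy the Mercer condition), and an L1-norm structural risk with hinge loss is minimized as a linear program, so many expansion coefficients come out exactly zero. The kernel parameters, the trade-off constant C, and the toy data are assumptions chosen only for the demonstration.

```python
# Illustrative sketch (assumed details, not the authors' exact algorithm):
# L1-regularised hinge-loss classification in a hidden space built from a
# possibly non-Mercer kernel, solved as a linear program with SciPy.
import numpy as np
from scipy.optimize import linprog

def sigmoid_kernel(X, Z, a=0.5, r=-1.0):
    # tanh kernel; for many (a, r) it is indefinite, i.e. non-Mercer
    return np.tanh(a * X @ Z.T + r)

def fit_sparse_hidden_space_svm(X, y, C=10.0):
    """Minimise  sum_j |beta_j| + C * sum_i xi_i
       s.t.      y_i * (K[i, :] @ beta + b) >= 1 - xi_i,  xi_i >= 0."""
    m = X.shape[0]
    K = sigmoid_kernel(X, X)                      # hidden-space representation
    # LP variables: [p (m), q (m), b_plus, b_minus, xi (m)],
    # with beta = p - q and b = b_plus - b_minus, all variables >= 0.
    c = np.concatenate([np.ones(2 * m), [0.0, 0.0], C * np.ones(m)])
    Yk = y[:, None] * K                           # rows y_i * K[i, :]
    A_ub = np.hstack([-Yk, Yk, -y[:, None], y[:, None], -np.eye(m)])
    b_ub = -np.ones(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    v = res.x
    beta = v[:m] - v[m:2 * m]
    b = v[2 * m] - v[2 * m + 1]
    return beta, b

def predict(X_train, beta, b, X_new):
    return np.sign(sigmoid_kernel(X_new, X_train) @ beta + b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 2))
    y = np.sign(X[:, 0] + X[:, 1])                # simple separable toy task
    beta, b = fit_sparse_hidden_space_svm(X, y)
    print("non-zero coefficients:", int(np.sum(np.abs(beta) > 1e-6)), "of", len(beta))
    print("training accuracy:", np.mean(predict(X, beta, b, X) == y))
```

In this sketch the sparsity reported in the abstract corresponds to the number of non-zero expansion coefficients: only those hidden-space columns enter the decision function, which is why evaluation on new samples becomes faster as the solution gets sparser.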