Orthogonal Forward Selection based on Leave-One-Out criteria (OFS-LOO) is a recently proposed data-modeling method that produces robust kernel regression models with tunable parameters. OFS-LOO follows a greedy strategy: it uses a global optimization algorithm to tune the parameters of each regressor in turn, adding model terms one by one to reduce the leave-one-out criterion. However, OFS-LOO keeps only the current best solution as the parameters of the new regressor and ignores the influence of this choice on later steps, which harms the sparsity of the model. This paper proposes a novel tree-structured algorithm within the OFS-LOO framework. When selecting each term of the kernel model, the Repeated Weighted Boosting Search (RWBS) algorithm is employed, and multiple local optima found by RWBS are retained as candidates for the kernel parameters. The new method seeks a compromise between conventional OFS-LOO and the globally optimal solution. Experiments show that, compared with the conventional method, the new approach yields kernel models with better sparsity and stronger generalization.
Orthogonal Forward Selection based on Leave-One-Out criteria (OFS-LOO) has recently been proposed as an excellent tool for data modeling, capable of producing robust kernel models with tunable parameters. OFS-LOO adopts a greedy scheme, using a global search algorithm to tune the kernel model term by term so as to minimize the LOO criterion. However, it is well known that a greedy algorithm seeks only the best performance at the current stage and ignores its effect on later stages, even though the selection of a particular regressor clearly has a significant impact on the tuning of the regressors that follow. In this paper, a novel tree-structured search is incorporated into the framework of OFS-LOO. The new method adopts the repeated weighted boosting search (RWBS) algorithm. For each regressor, multiple optima are kept as candidates for the parameters of the new regressor, rather than retaining only the best one as OFS-LOO does. This enhanced OFS-LOO provides a good compromise between an exhaustive search over all basis-function parameters and a non-optimal a priori choice. Numerical results show that, compared with traditional methods, the new approach produces kernel models with much better sparsity and generalization.
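The multi-candidate idea described above can be sketched as a beam-style forward selection. The following is a minimal illustration, not the paper's implementation: a generic stage-wise cost function stands in for the LOO criterion of the assembled kernel model, the candidate lists stand in for the local optima returned by RWBS at each stage, and the names `tree_forward_selection` and `loo_like_cost` are hypothetical.

```python
def tree_forward_selection(candidates_per_stage, cost, beam_width=3):
    """Tree-structured forward selection.

    At each stage, instead of committing to the single best candidate
    (the greedy OFS-LOO choice, beam_width=1), keep up to `beam_width`
    parameter candidates and expand each retained path at the next stage.
    `cost` scores a (possibly partial) sequence of chosen parameters.
    Returns the best full path and its cost.
    """
    paths = [()]  # each path is a tuple of parameters chosen so far
    for stage_candidates in candidates_per_stage:
        # Expand every surviving path with every candidate of this stage.
        expanded = [p + (c,) for p in paths for c in stage_candidates]
        # Prune back to the best `beam_width` partial models.
        expanded.sort(key=cost)
        paths = expanded[:beam_width]
    return paths[0], cost(paths[0])


def loo_like_cost(params):
    # Toy stand-in for the LOO criterion: distance of the accumulated
    # parameter sum from a target value (purely illustrative).
    return abs(sum(params) - 1.0)


# Greedy (beam_width=1) commits to the locally best first-stage candidate,
# while a wider beam can recover a better overall model.
greedy_path, greedy_cost = tree_forward_selection(
    [[0.5, 0.4], [0.6]], loo_like_cost, beam_width=1)
beam_path, beam_cost = tree_forward_selection(
    [[0.5, 0.4], [0.6]], loo_like_cost, beam_width=2)
```

In this toy example the greedy run keeps 0.5 at the first stage (locally best) and ends with a worse final cost than the beam run, which also retains 0.4 and finds the better full sequence, mirroring how keeping multiple RWBS optima can improve on purely greedy OFS-LOO.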