This paper studies the generalization ability of learning algorithms from the perspective of the algorithms themselves and presents a new stability framework for learning algorithms. Within this framework, a tighter exponential bound on the generalization error is obtained via a concentration inequality that estimates the tail probabilities of sums of functions of independent random variables.