Bounds on the convergence of the learning problem of regression estimation are studied using the theory of the l-norm in Banach spaces. Via the l-norm, we bound the difference between the risk of the function that minimizes the empirical risk over a given set of functions and the smallest possible risk over that set, i.e. the so-called sample error. A bound on the sample complexity based on the l-norm is also given.
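For reference, a minimal formalization of the quantities named above, assuming the standard empirical risk minimization setting with squared loss for concreteness; the symbols $\mathcal{H}$, $R$, $R_{\mathrm{emp}}$, $\rho$, and $f_{\mathbf{z}}$ are illustrative notation, not taken from the paper:
\[
R(f) = \int \bigl(f(x) - y\bigr)^2 \, d\rho(x, y),
\qquad
R_{\mathrm{emp}}(f) = \frac{1}{m} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2,
\]
where $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$ is the sample drawn from the (unknown) distribution $\rho$. The empirical risk minimizer over the given function set $\mathcal{H}$ and the sample error it incurs are then
\[
f_{\mathbf{z}} = \operatorname*{arg\,min}_{f \in \mathcal{H}} R_{\mathrm{emp}}(f),
\qquad
\text{sample error} = R(f_{\mathbf{z}}) - \inf_{f \in \mathcal{H}} R(f).
\]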