In statistical inference, robustness means that when the data-generating process of a real problem departs from the model assumptions, the results of the algorithm are only slightly perturbed and its predictive performance is preserved. This paper introduces the research methods of statistical robustness into machine learning. We show that nearest neighbor estimation, a form of local learning, converges to the Bayes optimal estimate in the large-sample setting, and that the conditions for this convergence also establish nearest neighbor estimation as a robust estimator. Finally, experiments on synthetic and real datasets demonstrate that the generalization performance of supervised learning prediction is maintained even when some outliers affect the model.
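As a rough illustration of the robustness property claimed above, the following sketch shows how a k-nearest-neighbor prediction (a majority vote over the k nearest training points) is largely unaffected by a few mislabeled outliers. The function `knn_predict`, the synthetic clusters, and the choice of k are assumptions made for this example, not details from the paper:

```python
import random
from collections import Counter

def knn_predict(points, labels, x, k=5):
    """Predict the label of x by majority vote among its k nearest
    training points (squared Euclidean distance in the plane)."""
    by_dist = sorted(
        range(len(points)),
        key=lambda i: (points[i][0] - x[0]) ** 2 + (points[i][1] - x[1]) ** 2,
    )
    votes = Counter(labels[i] for i in by_dist[:k])
    return votes.most_common(1)[0][0]

# Hypothetical two-class data: one cluster near (0, 0) with label 0,
# one cluster near (3, 3) with label 1.
rng = random.Random(0)
points = ([(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(50)]
          + [(rng.gauss(3, 0.5), rng.gauss(3, 0.5)) for _ in range(50)])
labels = [0] * 50 + [1] * 50

# Flip a few labels to act as outliers; the majority vote over k
# neighbors still returns the clean label, because at most 3 of the
# 11 nearest neighbors of the query can carry the flipped label.
noisy = labels[:]
noisy[0] = noisy[1] = noisy[2] = 1

print(knn_predict(points, noisy, (0.0, 0.0), k=11))  # → 0
```

This mirrors the intuition behind the paper's claim: a local estimator that averages (here, votes) over a neighborhood dampens the influence of any small number of contaminated points, which is the sense in which nearest neighbor estimation is robust.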