Facial feature point tracking obtains precise information about facial components beyond the coarse face position and motion trajectory, and plays an important role in computer vision research. The active appearance model (AAM) is one of the most effective methods for describing the locations of facial feature points. However, its high-dimensional parameter space and gradient-descent optimization make it sensitive to initial parameters and prone to getting stuck in local minima, so trackers based on the conventional AAM cannot simultaneously cope well with large pose, illumination, and expression changes. Within a multi-view AAM framework, this paper proposes a real-time pose estimation algorithm that combines a random forest with linear discriminant analysis (LDA) to pre-estimate and update the head pose of the tracked face, effectively handling large pose variations in video. A modified online appearance model (OAM) is further proposed to evaluate the goodness of the AAM fit, and the AAM texture model is updated adaptively via incremental principal component analysis (PCA), which greatly improves tracking stability and the model's ability to cope with illumination and expression changes. Experimental results show that the proposed algorithm performs well in terms of accuracy, robustness, and real-time performance for facial feature point tracking in video.
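To make the pipeline described above more concrete, the following is a minimal sketch of the two learning components named in the abstract: LDA plus random-forest pose pre-estimation, and incremental-PCA updating of the AAM texture model. It uses off-the-shelf scikit-learn components; all function names, dimensions, and hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of pose pre-estimation
# and adaptive texture-model updating; names and parameters are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import IncrementalPCA

# Pose pre-estimation: project face patches into a discriminative subspace
# with LDA, then classify the head pose (e.g. left / frontal / right) with
# a random forest.
lda = LinearDiscriminantAnalysis(n_components=2)        # 3 pose classes -> 2-D
forest = RandomForestClassifier(n_estimators=50, max_depth=10)

def train_pose_estimator(patches: np.ndarray, pose_labels: np.ndarray) -> None:
    """patches: (N, D) flattened gray-scale face patches; pose_labels: (N,)."""
    projected = lda.fit_transform(patches, pose_labels)
    forest.fit(projected, pose_labels)

def estimate_pose(patch: np.ndarray) -> int:
    """Return the predicted view index used to select the view-specific AAM."""
    return int(forest.predict(lda.transform(patch.reshape(1, -1)))[0])

# Adaptive texture model: when the OAM judges a fit reliable, the
# shape-normalized texture of that frame is folded into the texture basis
# with incremental PCA, so illumination and expression drift is absorbed.
texture_pca = IncrementalPCA(n_components=20)

def update_texture_model(good_textures: np.ndarray) -> None:
    """good_textures: (n, D) textures from frames accepted by the OAM check."""
    texture_pca.partial_fit(good_textures)
```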