The visual prior of a target, learned offline from sample images, provides an efficient representation of the target, and fully exploiting this prior knowledge during tracking improves tracking accuracy. Motivated by this, an algorithm is proposed that uses offline training results for online tracking. A deep neural network is first trained on samples to learn the target's visual prior; tracking is then carried out within a Bayesian inference framework, where the learned visual prior serves as the appearance representation of the target and the tracking result is obtained sequentially by a particle filter. To prevent tracking drift, the state model of the system is built from matched feature points, and the target is decomposed into sub-targets for similarity measurement, which improves robustness to partial occlusion. Experiments on several public benchmark datasets show that the algorithm improves tracking accuracy, prevents drift, and achieves reliable tracking over long sequences.
A visual prior learned from generic real-world images can precisely represent objects, and tracking results can be improved by exploiting this learned prior. Motivated by this, we propose an algorithm that transfers a visual prior learned offline to online object tracking. First, we train a deep neural network to learn generic image features. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter estimates the tracking result sequentially. To avoid tracking drift, the state model is estimated from matched scale-invariant feature transform (SIFT) feature correspondences, and the object is divided into sub-objects for tracking to enhance robustness under partial occlusion. Experimental results on benchmark video sequences show that the proposed algorithm improves tracking precision and prevents tracking failure, achieving robust tracking over long sequences.
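To make the tracking loop described in the abstract concrete, the following is a minimal sketch (not the authors' code) of a particle-filter step in which the state model comes from matched SIFT correspondences and the observation model is stood in for by a hypothetical `appearance_score` callable representing the offline-learned deep-network prior. The ratio-test threshold, the Gaussian diffusion scale, and the weighted-mean state estimate are illustrative choices, and the sub-object decomposition is only noted in a comment.

```python
import cv2
import numpy as np

def estimate_shift_from_sift(prev_patch, curr_patch):
    """Estimate the object's translation from matched SIFT correspondences."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_patch, None)
    kp2, des2 = sift.detectAndCompute(curr_patch, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe ratio test to keep reliable correspondences only.
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if not good:
        return np.zeros(2)
    shifts = [np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt) for m in good]
    return np.median(shifts, axis=0)  # robust translation estimate

def track_frame(frame, particles, weights, shift, appearance_score, patch_size, sigma=5.0):
    """One particle-filter step: propagate particles with the SIFT-based state
    model, weight them with the learned appearance model, then resample."""
    n = len(particles)
    w, h = patch_size
    H, W = frame.shape[:2]
    # State model: feature-matching shift plus Gaussian diffusion, clipped to the frame.
    particles = particles + shift + np.random.randn(n, 2) * sigma
    particles[:, 0] = np.clip(particles[:, 0], 0, W - w)
    particles[:, 1] = np.clip(particles[:, 1], 0, H - h)
    # Observation model: score the patch around each particle with the learned prior.
    # The paper additionally splits the target into sub-objects and combines their
    # similarity scores to stay robust under partial occlusion.
    scores = np.array([
        appearance_score(frame[int(y):int(y) + h, int(x):int(x) + w])
        for x, y in particles
    ])
    weights = scores - scores.min() + 1e-6
    weights /= weights.sum()
    # Weighted-mean state estimate, then resampling to avoid weight degeneracy.
    estimate = (weights[:, None] * particles).sum(axis=0)
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n), estimate
```

In use, `track_frame` would be called once per video frame with the shift returned by `estimate_shift_from_sift` on the previous and current target patches; how the deep network is trained offline and converted into `appearance_score` is described in the body of the paper, not here.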