The sparsity constraint in the L1 tracker's representation model gives it good robustness to partial occlusion, but it also makes tracking slow. To address this problem, this paper proposes a coding transfer method for visual tracking. The method uses a low-resolution dictionary to compute the representation coefficients of candidate targets and a high-resolution dictionary to construct the observation likelihood, which effectively reduces the computational cost of tracking. To improve the precision of coding transfer and the dictionary's ability to cope with background clutter, an online robust discriminative joint dictionary learning model is proposed for dictionary updating. Experimental results demonstrate that the proposed method achieves good robustness and fast tracking speed.
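The coding-transfer idea described above can be sketched as follows: sparse codes are computed against a cheap low-resolution dictionary, then those same coefficients are applied to a coupled high-resolution dictionary to score a candidate. This is a minimal illustrative sketch, not the paper's implementation; the greedy OMP-style coder, the Gaussian likelihood with bandwidth `sigma`, and all function names are assumptions.

```python
import numpy as np

def sparse_code(y_lo, D_lo, n_nonzero=3):
    """Greedy OMP-style sparse coding of a low-resolution candidate y_lo
    over the low-resolution dictionary D_lo (columns are unit-norm atoms).
    Illustrative stand-in for the L1 solver used in sparse trackers."""
    residual = y_lo.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D_lo.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the active set by least squares.
        coef, *_ = np.linalg.lstsq(D_lo[:, support], y_lo, rcond=None)
        residual = y_lo - D_lo[:, support] @ coef
    alpha = np.zeros(D_lo.shape[1])
    alpha[support] = coef
    return alpha

def observation_likelihood(y_hi, D_hi, alpha, sigma=0.1):
    """Transfer the low-resolution codes alpha to the coupled
    high-resolution dictionary and score the candidate by its
    reconstruction error under a Gaussian-kernel likelihood."""
    err = np.linalg.norm(y_hi - D_hi @ alpha) ** 2
    return float(np.exp(-err / sigma))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Coupled dictionaries sharing the same 16 atoms at two resolutions.
    D_lo = rng.standard_normal((8, 16));  D_lo /= np.linalg.norm(D_lo, axis=0)
    D_hi = rng.standard_normal((32, 16)); D_hi /= np.linalg.norm(D_hi, axis=0)
    # A synthetic target built from two atoms.
    alpha_true = np.zeros(16); alpha_true[[2, 7]] = [1.0, -0.5]
    y_lo, y_hi = D_lo @ alpha_true, D_hi @ alpha_true
    # Cheap coding step on the low-resolution observation only.
    alpha = sparse_code(y_lo, D_lo)
    # Likelihood evaluated at high resolution without re-solving.
    good = observation_likelihood(y_hi, D_hi, alpha)
    bad = observation_likelihood(rng.standard_normal(32), D_hi, alpha)
    print(good > bad)
```

The point of the split is that the expensive sparse-coding step runs on low-dimensional data, while the likelihood is still evaluated against the high-resolution appearance, so a true candidate scores well above background clutter.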