No single sensor can solve the tracking and registration problem in outdoor augmented reality systems. To improve the accuracy and robustness of visual tracking, a hybrid tracking algorithm combining an inertial tracker with visual measurement is proposed. Within an extended Kalman filter framework, the algorithm estimates the camera's motion trajectory by fusing information from the visual and inertial sensors, and uses the visual measurements to correct the inertial sensor's zero-point bias in real time; the SCAAT method is adopted to handle the sampling-time asynchrony between the inertial sensor and the visual measurements. Experimental results show that the algorithm effectively improves the accuracy and stability of motion estimation.
To date, no single sensor provides a complete solution for tracking in outdoor augmented reality systems. To improve the robustness and accuracy of real-time visual tracking, we present a sensor fusion algorithm that combines inertial sensors with a CMOS camera, making it suitable for unknown environments. The algorithm uses an extended Kalman filter to fuse inertial and vision data and estimate the trajectory of the camera, while the inherent drift of the inertial sensor is corrected in real time using the vision measurements. The single-constraint-at-a-time (SCAAT) method is introduced to assimilate the sequential observations. Experimental results show that proper use of the additional inertial information effectively enhances the robustness and accuracy of visual tracking.
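The fusion scheme described above can be illustrated with a deliberately simplified sketch: a 1-D extended Kalman filter that dead-reckons with accelerometer readings, models the inertial sensor's zero-point bias as a state, and corrects it with SCAAT-style single-constraint vision updates. The state vector, noise levels, 10 Hz vision rate, and constant-bias model are all illustrative assumptions, not the paper's actual filter.

```python
import numpy as np

# Hypothetical simplified model (NOT the authors' filter): 1-D state
# x = [position, velocity, accel_bias]. The bias term represents the
# inertial sensor's zero-point drift, which vision updates estimate.

def predict(x, P, a_meas, dt, q=1e-3):
    """Propagate the state with one accelerometer reading a_meas."""
    F = np.array([[1.0, dt, -0.5 * dt**2],   # bias is subtracted from
                  [0.0, 1.0, -dt],           # the measured acceleration
                  [0.0, 0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt, 0.0])
    x = F @ x + B * a_meas           # bias-corrected dead reckoning
    P = F @ P @ F.T + q * np.eye(3)  # uncertainty grows between fixes
    return x, P

def update(x, P, z_pos, r=1e-4):
    """SCAAT-style step: assimilate a single scalar vision constraint."""
    H = np.array([[1.0, 0.0, 0.0]])  # vision observes position only
    y = z_pos - H @ x                # innovation
    S = H @ P @ H.T + r              # innovation covariance (scalar)
    K = (P @ H.T) / S                # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

# Simulated scenario: camera is stationary (true position 0), but the
# accelerometer reports a constant +0.5 m/s^2 bias. Inertial predictions
# run at 100 Hz; vision fixes arrive asynchronously at 10 Hz.
x, P = np.zeros(3), np.eye(3)
dt, bias = 0.01, 0.5
for k in range(5000):
    x, P = predict(x, P, a_meas=bias, dt=dt)
    if k % 10 == 0:                  # one vision constraint at a time
        x, P = update(x, P, z_pos=0.0)
print(f"estimated accelerometer bias: {x[2]:.3f} m/s^2")
```

Running the loop, the bias state converges toward the injected 0.5 m/s^2 offset, which is the mechanism the abstract refers to: vision measurements observe position, and the filter attributes the residual inertial drift to the bias state.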