This paper proposes a stereo visual odometry method based on robust feature points for autonomous high-precision robot localization. Several local invariant feature algorithms are compared in terms of repeatability, accuracy, and efficiency, and the robust Accelerated-KAZE (AKAZE) algorithm is adopted to extract feature points. A stable feature-matching framework and an improved Random Sample Consensus (RANSAC) algorithm are proposed to remove outliers, so that the visual odometry method can be applied in dynamic environments. A fractional-step ego-motion estimation based on geometric constraints provides accurate information about the camera motion. The proposed method is tested on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset and on stereo data collected in a complex campus environment. Compared with classical stereo visual odometry methods, the proposed method better suppresses error accumulation, and its motion-estimation results satisfy the requirements of a real-time, high-precision localization system.
Acquisition of accurate positioning information is a core technology for mobile robots. This paper proposes a stereo visual odometry method based on robust features for a robot's autonomous high-precision positioning. First, the robust Accelerated-KAZE (AKAZE) feature algorithm is adopted to extract interest points, after comparison with other local invariant feature algorithms in three aspects: repeatability, accuracy, and efficiency. Then, we present a robust feature-matching strategy and an improved Random Sample Consensus (RANSAC) algorithm to remove outliers, which are mismatched features or features on dynamic objects; thus, the proposed method can be applied to dynamic environments. A geometry-constraint-based fractional-step ego-motion estimation algorithm provides the accurate motion of the camera rig. Last, we test the presented ego-motion scheme on the benchmark dataset of the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) and on data captured on campus in a considerably cluttered environment, and compare it with state-of-the-art approaches. The proposed approach restrains error accumulation and satisfies the requirements of a real-time, high-precision positioning system.
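The RANSAC-based outlier-rejection step named in the abstract can be illustrated with a minimal, generic sketch. This is plain RANSAC, not the paper's improved variant, and a simple 2-D translation model stands in for the full stereo geometry; the function name `ransac_translation` and all parameters are illustrative assumptions.

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=1.0, seed=0):
    """Generic RANSAC sketch (not the paper's improved variant):
    estimate a 2-D translation between matched point sets while
    rejecting outliers such as mismatches or points on moving objects.
    src, dst: (N, 2) arrays of putatively matched point coordinates."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]                   # candidate translation
        resid = np.linalg.norm(src + t - dst, axis=1)
        inliers = resid < thresh              # consensus set for this model
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the model on the largest consensus set found
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# Synthetic demo: 60 matches, the first 10 are gross outliers.
rng = np.random.default_rng(42)
src = rng.uniform(0, 100, size=(60, 2))
dst = src + np.array([2.0, 3.0]) + rng.normal(0, 0.1, size=(60, 2))
dst[:10] += rng.uniform(20, 40, size=(10, 2))
t, inliers = ransac_translation(src, dst)
```

In a visual odometry pipeline, the surviving inlier matches would then feed the ego-motion estimation, while rejected matches (mismatches or features on dynamic objects) are discarded before any pose is computed.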