To enable a robot to make accurate use of a global map and to solve the kidnapped-robot problem, a self-localization system based on visual content matching is designed, which fully exploits the objects and their layout in each room and corridor for robot navigation and localization. Visual content matching comprises extraction of the overlapping region between images, sub-block decomposition and reconstruction of that region, and similarity matching. The distortions in the images to be matched, caused by the robot's vision and displacement, are first adjusted to be consistent; the content similarity of the images is then analyzed, and algorithms for ceiling feature-point filtering and mismatched sub-block rejection are designed to achieve accurate extraction and reconstruction of the overlapping region. In the experiments, the robot's vision is accurately matched against a global map built from a keyframe sequence, and the most similar keyframe is retrieved in real time to localize the robot; the matching accuracy is no less than 95%, the localization RMSE is below 0.5 m, and the robot can still localize itself accurately after being kidnapped.
In order to take advantage of a global navigation map for robot self-localization and to solve the kidnapping problem, a robot vision localization system based on graphic content matching is presented. It makes good use of the different objects and their layout in different rooms and corridors to fix the robot's position, so that localization is not disturbed by similar objects. The system consists of extracting the overlapping region between frames and rebuilding that region through sub-block matching, while the interference caused by points on the wall and by mismatched sub-blocks is removed. Image distortion is adjusted to be consistent before matching. In the experiments, this graphic matching method matches the real-time robot vision with the keyframe global map effectively, finds the most similar keyframe for each vision image, and fixes the robot's position accurately. More than 95% of the robot vision frames are matched correctly, the position RMSE is below 0.5 m, and the robot can also localize itself effectively when it is kidnapped.
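The abstract describes the matching pipeline only at a high level. The Python sketch below illustrates one possible reading of the sub-block similarity step: an already aligned overlap region is decomposed into blocks, each block is scored, low-scoring (mismatched) blocks are rejected, and the keyframe with the highest remaining similarity is selected. The block size, the normalized cross-correlation metric, and the rejection threshold are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only: block-wise similarity matching of a query frame
# against keyframes. Block size, threshold, and the NCC metric are assumptions.
import numpy as np


def block_ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale blocks."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def frame_similarity(query: np.ndarray, keyframe: np.ndarray,
                     block: int = 32, reject_below: float = 0.3) -> float:
    """Split the aligned overlap region into sub-blocks, score each block with
    NCC, drop low-scoring (mismatched) blocks, and average the rest."""
    h, w = query.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            s = block_ncc(query[y:y + block, x:x + block],
                          keyframe[y:y + block, x:x + block])
            if s >= reject_below:  # crude stand-in for mismatched-block rejection
                scores.append(s)
    return float(np.mean(scores)) if scores else 0.0


def localize(query: np.ndarray, keyframes: list) -> int:
    """Return the index of the most similar keyframe, i.e. the estimated position."""
    return int(np.argmax([frame_similarity(query, kf) for kf in keyframes]))
```

In practice the query frame and each keyframe would first be warped so that their overlapping regions coincide, as the abstract's distortion-adjustment step implies; the sketch assumes that alignment has already been done.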