RGB-D cameras such as the Microsoft Kinect capture color images together with per-pixel depth information and are widely used for building 3D maps for mobile robots. This paper presents a method for robot self-localization and 3D modeling of indoor scenes with an RGB-D camera. First, the RGB-D camera captures successive frames of the surrounding environment. Second, SURF features are extracted and matched between successive frames; the robot's pose is computed from the displacement of the matched features, with a nonlinear least-squares solver minimizing the bidirectional reprojection error of corresponding points. Finally, the 3D point cloud observed by the camera is projected into the global map according to the current pose, using a keyframe technique combined with a view-center method. The approach was evaluated in three different scenes and compared across different feature types. Experimental results show that the error is only 0.023 m over a 5.88 m trajectory and that the method builds an accurate 3D model of the surrounding environment.
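The abstract does not state the objective function explicitly. A plausible form of the bidirectional reprojection error, sketched here under the assumption of a standard pinhole projection and a rigid transform between consecutive frames (all symbols are illustrative, not taken from the paper), is:

```latex
\min_{R,\,t} \sum_{i} \Big( \big\| p_i' - \pi(R\,P_i + t) \big\|^2
                          + \big\| p_i - \pi\!\big(R^\top (P_i' - t)\big) \big\|^2 \Big),
\qquad
\pi\!\left([X, Y, Z]^\top\right) = \Big( f_x \tfrac{X}{Z} + c_x,\; f_y \tfrac{Y}{Z} + c_y \Big)
```

where $P_i$ and $P_i'$ are the 3D points back-projected from the matched pixels $p_i$ and $p_i'$ using the depth channel, $(R, t)$ is the relative pose between the two frames, and $(f_x, f_y, c_x, c_y)$ are the camera intrinsics.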
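As an illustration of the feature-matching and pose-estimation step, the following is a minimal Python sketch using OpenCV. It is not the authors' implementation: SURF requires an opencv-contrib build compiled with OPENCV_ENABLE_NONFREE, the intrinsics FX, FY, CX, CY are hypothetical placeholder values for a Kinect-style camera, and the pose is recovered here with PnP + RANSAC as a stand-in for the paper's nonlinear least-squares minimization of the bidirectional reprojection error.

```python
import cv2
import numpy as np

# Hypothetical intrinsics; real values come from camera calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def match_surf(gray_prev, gray_curr):
    """Extract and match SURF features between two consecutive gray frames.

    SURF is patented: cv2.xfeatures2d is only available in opencv-contrib
    builds compiled with OPENCV_ENABLE_NONFREE.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(gray_prev, None)
    kp2, des2 = surf.detectAndCompute(gray_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test to discard ambiguous matches.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]
    return kp1, kp2, good

def estimate_pose(kp1, kp2, matches, depth_prev):
    """Back-project matched pixels of the previous frame to 3D using depth,
    then solve for the relative camera pose with PnP + RANSAC.

    Assumes depth is stored in millimeters and enough valid matches exist.
    """
    obj_pts, img_pts = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = depth_prev[int(v), int(u)] / 1000.0  # mm -> m
        if z <= 0:
            continue  # no valid depth reading at this pixel
        obj_pts.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
        img_pts.append(kp2[m.trainIdx].pt)
    K = np.array([[FX, 0, CX], [0, FY, CY], [0, 0, 1]])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```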
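For the mapping step, the abstract only names a keyframe technique and a view-center method without detail, so the sketch below (continuing the previous one) shows just the underlying operation: back-projecting a sub-sampled depth image and transforming it into the world frame with the current pose. The function name, the map representation as a flat point list, and the millimeter depth convention are assumptions.

```python
def integrate_frame(global_map, rgb, depth, R_wc, t_wc, stride=4):
    """Project one RGB-D frame into the global map (world frame).

    R_wc, t_wc: camera-to-world rotation and translation for this frame.
    Pixels are sub-sampled by `stride` to keep the point cloud tractable.
    """
    rows, cols = depth.shape
    for v in range(0, rows, stride):
        for u in range(0, cols, stride):
            z = depth[v, u] / 1000.0  # assume depth stored in mm
            if z <= 0:
                continue              # skip invalid depth readings
            p_cam = np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])
            # Transform into the world frame and keep the pixel's color.
            global_map.append((R_wc @ p_cam + t_wc.ravel(), rgb[v, u]))
```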