When the image data collected by the Kinect contain few or no feature points, the RGB-D SLAM algorithm suffers localization failure and large mapping errors. To address this problem, an improved localization algorithm is proposed that combines an inertial measurement unit (IMU), the Kinect somatosensory sensor, and the motion state of the robot itself. Attitude estimates are first compared and then fused; a prediction model for displacement is built from the IMU measurements, an observation model is built from the Kinect pose estimates, and the robot's motion commands and motion limits are taken as constraint conditions in an extended Kalman filter (EKF) fusion to improve the robot's localization. Experimental results show that the method effectively improves the localization accuracy of the robot and the mapping quality of the RGB-D SLAM algorithm.
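The abstract does not give the concrete filter equations, so the following is a minimal illustrative sketch in Python of the kind of EKF fusion described: an IMU-driven motion model supplies the prediction, the Kinect (RGB-D SLAM) pose estimate supplies the observation, and the current motion command bounds the prediction as a simple constraint. The planar pose state [x, y, θ], the class name PoseEKF, the noise covariances, and the velocity-clipping constraint are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

class PoseEKF:
    """Hypothetical EKF fusing IMU prediction with Kinect pose observations."""

    def __init__(self):
        self.x = np.zeros(3)                   # state: [x, y, theta]
        self.P = np.eye(3) * 1e-3              # state covariance
        self.Q = np.diag([1e-3, 1e-3, 1e-4])   # process noise (IMU drift), assumed
        self.R = np.diag([1e-2, 1e-2, 1e-3])   # measurement noise (Kinect), assumed

    def predict(self, v, omega, dt, v_max):
        """Predict with IMU-derived forward speed v and yaw rate omega.

        v_max comes from the current motion command and acts as a constraint:
        displacements the commanded motion cannot produce are clipped.
        """
        v = np.clip(v, -v_max, v_max)          # constraint from the motion command
        theta = self.x[2]
        self.x += np.array([v * np.cos(theta) * dt,
                            v * np.sin(theta) * dt,
                            omega * dt])
        # Jacobian of the nonlinear motion model w.r.t. the state
        F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                      [0.0, 1.0,  v * np.cos(theta) * dt],
                      [0.0, 0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z_kinect):
        """Fuse a Kinect/RGB-D SLAM pose estimate z = [x, y, theta]."""
        H = np.eye(3)                          # the pose is observed directly
        y = z_kinect - self.x                  # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x += K @ y
        self.P = (np.eye(3) - K @ H) @ self.P


# Example step with made-up numbers: one IMU prediction, one Kinect update.
ekf = PoseEKF()
ekf.predict(v=0.25, omega=0.05, dt=0.1, v_max=0.3)
ekf.update(np.array([0.024, 0.001, 0.006]))
```

In this sketch, when the Kinect frame lacks usable feature points the update step is simply skipped and the pose continues to propagate from the IMU prediction, which reflects the failure mode the paper targets; the paper's own attitude comparison step before fusion is not modeled here.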