To address the problem of indoor-outdoor follow-up environment perception in rapid 3D modeling, a multi-granularity follow-up environment perception algorithm based on optical images is proposed. The algorithm generates multi-granularity point cloud models that fit the real 3D environment from several types of optical images, and then compresses and uniformly represents the generated multi-granularity 3D models with a probabilistic octree. At each time step along the camera trajectory, the probabilistic octree representations of the multi-granularity point cloud models are dynamically fused by a Kalman filter, finally yielding a single temporally fused probabilistic octree model, abbreviated TFPOM, which dynamically fits the real environment at arbitrary granularity with reduced noise. Combined with pruning and merging strategies, the algorithm satisfies the environment modeling requirements of multi-granularity fusion and multi-granularity representation, effectively compresses the storage of the environment model, and achieves robust follow-up environment perception, facilitating applications such as model-based visual navigation and augmented reality. Experimental results show that, on platforms typified by wearable devices that integrate multiple heterogeneous optical image sensors and have limited computing power, the algorithm obtains in real time a multi-granularity TFPOM that adequately fits the real dynamic environment, and visual navigation based on this model exhibits small trajectory error.
An optical-image-based multi-granularity follow-up environment perception algorithm is proposed to address the indoor-to-outdoor follow-up environment perception problem in rapid 3D modeling. The algorithm first generates multi-granularity 3D point cloud models that approximate the real environment from different types of optical images, and a probabilistic octree representation is then introduced to compress and uniformly express these point cloud models. At each time step along the camera trajectory, the probabilistic octree representations of the multi-granularity point cloud models are dynamically fused with a Kalman filter, which finally yields a single temporally fused probabilistic octree model (TFPOM) that dynamically fits the real environment at arbitrary granularity with reduced noise. Benefiting from pruning and merging strategies, the proposed algorithm meets the requirements of multi-granularity fusion and multi-granularity representation, so the storage of environment models is effectively compressed and robust follow-up environment perception is achieved, both of which are essential for environment-model-based visual navigation and augmented reality. Experimental results show that the algorithm generates, in real time, a multi-granularity TFPOM that adequately fits the real dynamic environment on platforms such as wearable devices equipped with multiple heterogeneous optical image sensors and limited computing power, and visual navigation based on this model exhibits small trajectory error.
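The abstract does not include an implementation, so the following is only a minimal Python sketch of the ideas it names: a probabilistic octree whose per-node occupancy estimate is fused over time with a scalar Kalman update, together with split and prune operations standing in for multi-granularity refinement and the pruning/merging strategies. All names here (ProbOctreeNode, kalman_update, prune, the tolerance tol, etc.) are hypothetical illustrations, not the authors' code or data structures.

```python
class ProbOctreeNode:
    """Node of a probabilistic octree: occupancy kept as a (mean, variance) estimate."""

    def __init__(self, depth=0, max_depth=8):
        self.depth = depth
        self.max_depth = max_depth
        self.children = None       # None -> leaf node
        self.occ_mean = 0.5        # occupancy probability estimate (0.5 = unknown)
        self.occ_var = 1.0         # variance of that estimate

    def kalman_update(self, z, r):
        """Fuse one occupancy observation z with variance r (scalar Kalman step)."""
        k = self.occ_var / (self.occ_var + r)      # Kalman gain
        self.occ_mean += k * (z - self.occ_mean)   # corrected estimate
        self.occ_var *= (1.0 - k)                  # reduced uncertainty

    def split(self):
        """Refine into eight children, i.e. move to a finer granularity."""
        if self.children is None and self.depth < self.max_depth:
            self.children = [ProbOctreeNode(self.depth + 1, self.max_depth)
                             for _ in range(8)]
            for c in self.children:                # children inherit the parent state
                c.occ_mean, c.occ_var = self.occ_mean, self.occ_var

    def prune(self, tol=0.05):
        """Merge child leaves back into this node when their estimates agree within tol."""
        if self.children is None:
            return
        for c in self.children:
            c.prune(tol)
        if all(c.children is None for c in self.children):
            means = [c.occ_mean for c in self.children]
            if max(means) - min(means) < tol:      # children are (nearly) homogeneous
                self.occ_mean = sum(means) / 8.0
                self.occ_var = sum(c.occ_var for c in self.children) / 8.0
                self.children = None


# Hypothetical usage: refine a node, fuse observations from one camera frame, then prune.
root = ProbOctreeNode()
root.split()
for child in root.children:
    child.kalman_update(z=0.9, r=0.2)
root.prune()
```

In this sketch the Kalman gain balances the stored estimate against each new observation, so repeated observations shrink the variance over time, while prune collapses homogeneous subtrees to keep the model compact, roughly mirroring the compression role the abstract attributes to the pruning and merging strategies.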