This paper presents a new intelligent endoscope navigation system for guided endoscopic intervention. The navigation method fuses information from vision and tactile sensors. On the vision side, a path-finding method based on adaptive-threshold image segmentation is adopted, with an accelerated Otsu (maximum between-class variance) method performing the thresholding. On the tactile side, silicon micro tactile sensors fabricated by MEMS technology are used: four such sensors, packaged on the surface of the endoscope head, measure the pressure on the head in all directions. Experiments verify the feasibility and reliability of the navigation system; the method compensates for the failure of conventional approaches when the dark region is indistinct, improving the precision and reliability of endoscope navigation.
In this paper, a new navigation method for an intelligent colonoscope robot is presented. The proposed navigation method is developed to guide the colonoscope through a human colon. It is based on sensory fusion of quantitative parameters extracted from the captured images and readings from tactile sensors. In the vision-based navigation method, the dark region, which indicates the direction of the colon lumen, is segmented from the colonoscope image by an auto-thresholding algorithm. The auto-thresholding algorithm is based on Otsu's method and is optimized by analyzing the statistics of endoscope images. The tactile sensor used here is a bridge circuit of force-sensitive resistors built on a silicon base by MEMS technology. Four sensors are mounted symmetrically around the colonoscope's head, so they can measure the contact pressure in almost all directions. The proposed navigation method was tested with animal colons, and the experimental observations are discussed.
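The vision step described above rests on Otsu's maximum between-class variance thresholding, which the paper accelerates for endoscope images. The sketch below shows only the baseline Otsu computation and a dark-region mask, assuming 8-bit grayscale input; the paper's specific acceleration and the `dark_region_mask` helper name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image.

    Classic between-class-variance maximization over all 256 gray levels;
    the accelerated variant mentioned in the paper is not reproduced here.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                # gray-level probabilities
    omega = np.cumsum(p)                 # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))   # first moment up to level t
    mu_total = mu[-1]
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

def dark_region_mask(gray):
    """Pixels at or below the Otsu threshold: candidate lumen (dark) region."""
    return gray <= otsu_threshold(gray)

# toy image: a dark "lumen" patch inside a bright frame
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 20:40] = 30
mask = dark_region_mask(img)
```

The centroid of `mask` could then serve as the steering target for the colonoscope head, which is how the segmented dark region typically feeds a lumen-following controller.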
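Because the four tactile sensors sit symmetrically around the head, their readings can be fused into a lateral contact direction when the dark region is ambiguous. This is a minimal sketch of one plausible vector-sum fusion; the up/right/down/left placement at 90-degree spacing and the function name are assumptions, not the paper's bridge-circuit calibration.

```python
import math

def contact_direction(p_up, p_right, p_down, p_left):
    """Estimate lateral contact force from four symmetric pressure readings.

    Assumed layout: sensors at 90-degree spacing around the head. Opposite
    sensors are differenced to form a 2-D contact-force vector; the angle is
    None when there is no net lateral contact.
    """
    fx = p_right - p_left
    fy = p_up - p_down
    magnitude = math.hypot(fx, fy)
    angle = math.degrees(math.atan2(fy, fx)) if magnitude > 0 else None
    return magnitude, angle

# example: the head is pressing against the colon wall mostly on its right
mag, ang = contact_direction(0.1, 0.8, 0.1, 0.0)
```

A navigation loop could steer the head away from the reported angle whenever the magnitude exceeds a safety threshold, complementing the vision-based dark-region cue.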