Communication barriers have long existed between people with hearing disabilities and the hearing world; for them, communicating effectively with others has been a dream that seemed it might never come true. With the invention of the computer and subsequent technological advances, that dream has come within reach. Researchers have pursued automated sign language translation using Human-Computer Interaction (HCI), computer graphics, computer vision, and pattern recognition, with most previous work based on video. This paper proposes a gesture recognition method for automated sign language translation based on the depth images provided by Kinect, a three-dimensional (3D) motion-sensing input device. The hand is detected from the depth image, and the 3D coordinates of the palm center are obtained through the open-source OpenNI framework and the NITE middleware. Fingers are then identified, and the hand is tracked, using a contour tracing (border following) algorithm together with a three-point alignment algorithm, and the name of each finger is determined by vector matching. Finally, a three-layer classifier is designed to recognize static sign-language gestures. Compared with traditional methods based on data gloves or monocular cameras, the proposed method recognizes gestures more accurately. A gesture recognition system was implemented on this basis, and experimental results show that the method is concise and effective.
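The abstract does not spell out how the hand is segmented from the depth image. The minimal Python sketch below shows one common approach consistent with the description: keep only the pixels whose depth falls within a thin band around the palm depth reported by the hand tracker, inside a window centred on the palm. The function name, the band width, and the window size are illustrative assumptions, not values from the paper.

import numpy as np

def segment_hand(depth_mm, palm_rc, palm_depth_mm, band_mm=100, win=120):
    """Segment the hand from a depth frame around the tracked palm.

    depth_mm      : (H, W) array, depth frame in millimetres
    palm_rc       : (row, col) palm position projected into the image
    palm_depth_mm : palm depth reported by the hand tracker
    band_mm, win  : assumed hand-thickness band and window half-size
    """
    h, w = depth_mm.shape
    r, c = palm_rc
    r0, r1 = max(r - win, 0), min(r + win, h)
    c0, c1 = max(c - win, 0), min(c + win, w)
    mask = np.zeros((h, w), dtype=bool)
    roi = depth_mm[r0:r1, c0:c1].astype(np.int32)
    # Keep pixels whose depth is close to the palm depth: with a depth
    # camera this crude threshold already isolates the hand from the body.
    mask[r0:r1, c0:c1] = np.abs(roi - palm_depth_mm) < band_mm
    return mask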
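The contour tracing and three-point alignment steps are named but not detailed. One plausible reading of the three-point test, sketched below under that assumption, measures the angle formed by three contour points P[i-k], P[i], P[i+k]: a sharp, convex angle whose vertex lies farther from the palm center than its neighbours marks a fingertip candidate. It assumes the hand contour has already been extracted by the border-following step; the names and thresholds (k, angle_thresh, gap) are illustrative, not the paper's.

import numpy as np

def fingertip_candidates(contour, palm_center, k=20,
                         angle_thresh=np.radians(60)):
    """Flag fingertip candidates on a closed hand contour.

    contour     : (N, 2) array of (x, y) boundary points, in order
    palm_center : (2,) array, palm centroid in image coordinates
    """
    n = len(contour)
    tips = []
    for i in range(n):
        p_prev = contour[(i - k) % n]
        p_mid = contour[i]
        p_next = contour[(i + k) % n]
        v1, v2 = p_prev - p_mid, p_next - p_mid
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) *
                                  np.linalg.norm(v2) + 1e-9)
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
        # Fingertips are sharp AND point away from the palm; contour
        # valleys between fingers are sharp but closer to the palm.
        d_mid = np.linalg.norm(p_mid - palm_center)
        d_nb = max(np.linalg.norm(p_prev - palm_center),
                   np.linalg.norm(p_next - palm_center))
        if angle < angle_thresh and d_mid > d_nb:
            tips.append(i)
    return merge_runs(tips, contour, n)

def merge_runs(indices, contour, n, gap=5):
    """Collapse consecutive candidate indices into one point per fingertip."""
    if not indices:
        return []
    runs, run = [], [indices[0]]
    for i in indices[1:]:
        if i - run[-1] <= gap:
            run.append(i)
        else:
            runs.append(run)
            run = [i]
    runs.append(run)
    # Merge the first and last runs if they touch across index 0.
    if len(runs) > 1 and (runs[0][0] + n) - runs[-1][-1] <= gap:
        runs[0] = runs.pop() + runs[0]
    return [contour[r[len(r) // 2]] for r in runs]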
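The abstract states that a three-layer classifier performs the final static-gesture recognition but does not say what each layer tests. The sketch below is one hypothetical cascade consistent with the features described: layer 1 filters by finger count, layer 2 by the set of finger names produced by vector matching, and layer 3 by inter-finger angles. The gesture table and labels are invented solely for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class HandFeatures:
    finger_count: int            # number of extended fingers
    finger_names: frozenset      # e.g. frozenset({"index", "middle"})
    inter_finger_angles: tuple   # angles (degrees) between adjacent fingers

# Hypothetical vocabulary: (count, names) keys map to angle rules.
GESTURES = {
    (2, frozenset({"index", "middle"})): [
        (30.0, "U"),   # fingers held together
        (90.0, "V"),   # fingers spread apart
    ],
}

def classify(features):
    # Layers 1 and 2: finger count and finger identities select a bucket.
    candidates = GESTURES.get((features.finger_count, features.finger_names))
    if not candidates:
        return "unknown"
    # Layer 3: inter-finger angles resolve the remaining ambiguity.
    spread = max(features.inter_finger_angles, default=0.0)
    for max_angle, label in candidates:
        if spread <= max_angle:
            return label
    return "unknown"

For example, classify(HandFeatures(2, frozenset({"index", "middle"}), (12.0,))) returns "U": two extended, nearly parallel fingers pass all three layers, while a wider spread of the same two fingers would fall through to "V".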