A novel non-wearable human-computer interaction (HCI) method based on facial orientation is proposed. An active shape model is employed to locate feature points on the facial contour, which overcomes the effects of illumination and facial pose variation while pruning redundant information and reducing computational complexity, so the interaction runs in real time. The facial contour features are partitioned by maximal triangulation; on this basis, a calibrated Kinect sensor maps the resulting 2D facial geometric model into 3D space, and the normal vector of the facial orientation is computed dynamically. The interaction target in space is then determined by mean filtering across successive image frames. The user wears no equipment or auxiliary markers and can move freely, so the method meets the comfort and diversity requirements of HCI and realizes natural, non-wearable interaction. Comparative experiments verify that the proposed method is effective and feasible.
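The geometric core described above, computing a face-orientation normal from triangulated 3D landmarks and stabilising it with an inter-frame mean filter, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the landmark coordinates are assumed to have already been mapped into 3D camera space by the Kinect, and the window size `k` is a hypothetical choice.

```python
import numpy as np
from collections import deque

def face_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial landmarks.

    The normal of the largest triangle in the contour triangulation
    serves as the facial-orientation vector (cross product of two edges).
    """
    v1 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(p3, dtype=float) - np.asarray(p1, dtype=float)
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

class TemporalMeanFilter:
    """Average the normal over the last k frames (assumed window size)
    to suppress jitter before intersecting the ray with the target."""
    def __init__(self, k=5):
        self.buf = deque(maxlen=k)

    def update(self, normal):
        self.buf.append(np.asarray(normal, dtype=float))
        m = np.mean(self.buf, axis=0)
        return m / np.linalg.norm(m)
```

In use, `face_normal` would be called once per frame on the three vertices of the maximal triangle, and the filtered output of `TemporalMeanFilter.update` would define the pointing ray whose intersection with the scene gives the interaction target.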