This paper analyzes the interaction characteristics of vision-based gestures and proposes an interaction model for non-contact devices. Based on the data flow diagram method, a data flow model supporting continuous information input is established to describe the processing pipeline of vision-based gesture information, and the gesture interface development toolkit IEToolkit is built on the "paint-view-correct" metaphor. The main features of the system include: an extensible interface based on a plug-in design, which makes it easy for developers to extend the system; unified management of the multiple classifiers inside the platform, allowing users to configure these classifiers dynamically; a visual user interface with which users can flexibly define high-level gesture interaction semantics for different applications; and the shielding of low-level technical details such as image processing and machine learning, which lowers the difficulty of interface development. Finally, the software development method based on IEToolkit and application examples are introduced. The examples show that this interface toolkit can provide a unified platform and an effective solution for the design and development of interactive games based on vision-based gestures.
In this paper, a toolkit for designing vision-based gesture interactions is presented. First, an abstract interaction model for non-contact devices is proposed. Then, based on a data flow diagram method and an interactive learning approach, the IEToolkit is built. It is designed around the characteristics of vision-based interaction and shields the underlying details of the computer vision algorithms. It has the following features: an extensible plug-in interface that lets developers add new classifiers, a unified management mechanism that supports dynamic configuration of all classifiers, and a visual user interface that supports the definition of high-level gesture semantics. Finally, several prototype applications are presented. Experimental results show that the IEToolkit provides a unified platform and a general solution for vision-based hand gesture games.
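To make the plug-in and unified-management ideas concrete, the sketch below shows one possible way such a classifier registry and data-flow pipeline could be organized. It is a minimal illustration only: the names `ClassifierPlugin`, `ClassifierRegistry`, and `GesturePipeline` are hypothetical and are not part of the IEToolkit API described in the paper.

```python
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, List


class ClassifierPlugin(ABC):
    """Hypothetical plug-in contract: each gesture classifier implements
    this interface so the platform can manage it uniformly."""

    name: str = "unnamed"

    @abstractmethod
    def configure(self, params: Dict[str, Any]) -> None:
        """Apply a dynamic configuration (thresholds, model paths, ...)."""

    @abstractmethod
    def classify(self, frame_features: Any) -> str:
        """Map features extracted from a video frame to a gesture label."""


class ClassifierRegistry:
    """Unified management of all registered classifiers."""

    def __init__(self) -> None:
        self._plugins: Dict[str, ClassifierPlugin] = {}

    def register(self, plugin: ClassifierPlugin) -> None:
        self._plugins[plugin.name] = plugin

    def configure(self, name: str, params: Dict[str, Any]) -> None:
        self._plugins[name].configure(params)

    def classify_all(self, frame_features: Any) -> Dict[str, str]:
        # Run every registered classifier on the same feature stream.
        return {n: p.classify(frame_features) for n, p in self._plugins.items()}


class GesturePipeline:
    """Data-flow style processing: a chain of stages turns raw frames into
    gesture labels, handling continuous input one frame at a time."""

    def __init__(self, stages: List[Callable[[Any], Any]]) -> None:
        self._stages = stages  # e.g. [extract_features, registry.classify_all]

    def process(self, frame: Any) -> Any:
        data = frame
        for stage in self._stages:
            data = stage(data)
        return data
```

In this sketch, a developer would subclass `ClassifierPlugin` for each recognizer, register it once with the registry, and let the pipeline drive it on every incoming frame; the high-level gesture interaction semantics described in the abstract would then be defined on top of the returned labels rather than inside the vision code itself.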