Taking touch-centered multimodal interaction research as its starting point, and drawing on the classic cognitive model of multimedia learning together with recent cognitive theories, this paper proposes a cognition model of how touch, auditory, and visual information is integrated in the human mind, as well as a layered processing model of how a computer processes multimodal information, and analyzes the corresponding multimodal integration methods. The layered model defines separate implementation paths for the interactive interface and the application's main functions, and distinguishes interactive devices from the processing of interactive information. This separation avoids the redundant work caused by high coupling and allows interface research to be simplified from different perspectives. Using the two models, a sample multimodal interaction application integrating touch, audition, and vision was implemented. The results show that the proposed models reduce the analytical difficulty of multimodal interaction studies and improve experimental efficiency.
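To make the separation concrete, the following is a minimal Python sketch of the layering idea under stated assumptions: device input flows into an integration step owned by the interface layer, and the application core receives only fused commands, never raw device events. All names here (ModalEvent, IntegrationLayer, InterfaceLayer, device_input, on_command) and the naive time-window fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class ModalEvent:
    modality: str     # "touch", "audio", or "visual"
    payload: str      # simplified; real systems carry structured data
    timestamp: float  # seconds


class IntegrationLayer:
    """Fuses events from several modalities into one group.

    Fusion here is a naive time-window grouping used only to make the
    layering concrete; it is not the paper's integration method.
    """

    def __init__(self, window: float = 0.5) -> None:
        self.window = window
        self.buffer: List[ModalEvent] = []

    def push(self, event: ModalEvent) -> Optional[List[ModalEvent]]:
        # Discard buffered events that fall outside the fusion window.
        self.buffer = [e for e in self.buffer
                       if event.timestamp - e.timestamp <= self.window]
        self.buffer.append(event)
        # Emit a fused group once all three modalities are present.
        if {e.modality for e in self.buffer} == {"touch", "audio", "visual"}:
            fused, self.buffer = self.buffer, []
            return fused
        return None


class InterfaceLayer:
    """Owns the devices and the integration step.

    The application core only ever sees fused commands through the
    on_command callback, never raw device input, which is the
    decoupling the layered model calls for.
    """

    def __init__(self, on_command: Callable[[str], None]) -> None:
        self.integrator = IntegrationLayer()
        self.on_command = on_command  # boundary to the application core

    def device_input(self, modality: str, payload: str, t: float) -> None:
        fused = self.integrator.push(ModalEvent(modality, payload, t))
        if fused is not None:
            summary = " + ".join(f"{e.modality}:{e.payload}" for e in fused)
            self.on_command(summary)


# Application core: defined with no reference to devices or the interface.
def application_core(command: str) -> None:
    print("core received:", command)


ui = InterfaceLayer(on_command=application_core)
ui.device_input("touch", "tap", 0.10)
ui.device_input("visual", "gaze-at-button", 0.20)
ui.device_input("audio", "voice-confirm", 0.30)  # completes one fused group
```

Because the core receives only fused commands across a single callback boundary, either side can be studied or revised independently, which is the decoupling the abstract credits with avoiding the redundant work caused by high coupling.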