Using motion capture data, a statistical model of virtual human motion is learned in order to create realistic, controllable character motion. A method is proposed in which the raw motion data are clustered to extract local dynamic motion features, called dynamic textures, each of which is described by a linear dynamical system. Those systems with a clear semantic meaning are selectively annotated, and an annotated dynamic texture graph is constructed from them. With this statistical model, highly realistic and controllable virtual human motion can be generated. Results show that the method produces smooth, natural human motion in interactive environments.
This paper presents an approach for creating lifelike, controllable motion in interactive virtual environments, achieved by learning a statistical model from a set of motion capture sequences. The method clusters the motion data into motion primitives that capture local dynamical characteristics (dynamic textures), models the dynamics of each cluster with a linear dynamical system (LDS), annotates those LDSs that have a clear semantic meaning, and computes the cross-entropy between frames of the LDSs to construct a directed graph called an annotated dynamic texture graph (ADTG), which has a two-level structure. The lower level retains the detail and nuance of the captured motion, while the higher level generalizes the motion and encapsulates the transitions among LDSs. The results show that this framework can generate smooth, natural-looking motion in interactive environments.
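As a rough illustration of the per-cluster modeling step, the sketch below fits a first-order linear dynamical system x_{t+1} = A x_t + b to a sequence of pose frames by least squares and rolls it forward to synthesize new frames. This is a simplification under stated assumptions: the helper names are hypothetical, frames are assumed to be flattened pose vectors, and the paper's LDS formulation (with hidden states, noise terms, and cross-entropy distances) is reduced here to its deterministic core.

```python
import numpy as np

def fit_lds(frames):
    """Fit x_{t+1} ~ A x_t + b by least squares over consecutive frame pairs.

    frames : (T, d) array of flattened pose vectors from one motion cluster.
    Returns the transition matrix A (d, d) and offset b (d,).
    (Hypothetical helper; a simplified stand-in for the paper's LDS fitting.)
    """
    X = np.asarray(frames, dtype=float)
    past, future = X[:-1], X[1:]
    # Augment past frames with a bias column so the fit includes an offset b.
    P = np.hstack([past, np.ones((len(past), 1))])
    W, *_ = np.linalg.lstsq(P, future, rcond=None)
    A, b = W[:-1].T, W[-1]
    return A, b

def synthesize(A, b, x0, n):
    """Iterate the fitted dynamics from x0 to generate n frames."""
    out = [np.asarray(x0, dtype=float)]
    for _ in range(n - 1):
        out.append(A @ out[-1] + b)
    return np.stack(out)
```

In the full method, one such system would be fit per cluster, and the graph's edges would encode which systems can plausibly follow one another.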