To address the problems that existing talking avatars show relatively monotonous facial expressions and that expressions and movements are poorly coordinated, a method for building a realistic, emotional virtual human is proposed. The method first simulates dynamic facial expressions with three temporal parameters, onset, apex, and offset, and uses blend-shape morphing to synthesize complex expressions that fuse multiple elements; it then designs eye and head movements based on statistical data from human psychology to make the avatar look more lifelike; finally, it analyzes how external conditions such as camera position and lighting affect the avatar's realism. Experimental results show that the avatar built with this method is not only natural and expressive, but also achieves good coordination and synchronization among speech, dynamic facial expressions, eye movement, and head movement.
Aiming at the problems that the facial expressions of existing talking avatars are relatively simple and that expressions and actions are not well coordinated, this paper proposes a new method to build a realistic and emotional avatar. First, it uses three parameters, onset, apex and offset, to simulate dynamic facial expressions, and uses blend-shape functions to synthesize complex expressions; it then uses statistical data from human psychology to design eye and head movements that make the avatar more realistic. Finally, it analyzes the influence of external conditions, such as camera position and lighting, on the realism of the avatar. Experimental results indicate that the avatar not only looks realistic and full of emotion, but also that the voice, dynamic facial expressions, eye movement and head movement achieve very good coordination and synchronization.
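To make the onset/apex/offset dynamics and the blend-shape synthesis described above concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes piecewise-linear transitions for the three-phase intensity envelope and a simple linear blend of target shapes; all function names, parameter values, and the toy mesh data are illustrative.

```python
import numpy as np

def expression_intensity(t, onset, apex, offset):
    """Three-phase intensity envelope for one expression episode:
    ramp up during onset, hold during apex, decay during offset (seconds)."""
    if t < 0:
        return 0.0
    if t < onset:                       # rising phase
        return t / onset
    if t < onset + apex:                # hold phase
        return 1.0
    if t < onset + apex + offset:       # decay phase
        return 1.0 - (t - onset - apex) / offset
    return 0.0

def blend(neutral, targets, weights):
    """Linear blend-shape morphing: add weighted displacements of each
    target shape (e.g. 'smile', 'brow_raise') to the neutral mesh."""
    vertices = neutral.copy()
    for name, w in weights.items():
        vertices += w * (targets[name] - neutral)
    return vertices

# Example: a compound expression mixing two basic shapes, modulated over
# time by the onset/apex/offset envelope (toy 4-vertex mesh as placeholder).
neutral = np.zeros((4, 3))
targets = {"smile": np.random.rand(4, 3),
           "brow_raise": np.random.rand(4, 3)}
for t in (0.1, 0.5, 1.2):
    k = expression_intensity(t, onset=0.3, apex=0.6, offset=0.4)
    face = blend(neutral, targets, {"smile": 0.8 * k, "brow_raise": 0.5 * k})
```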