Current 3D human body reconstruction techniques generally require data observed from multiple viewpoints before modeling and constrain the subject's posture, so they are poorly suited to interactive applications that need human body models. To address this problem, a progressive human body reconstruction method based on a single Kinect is proposed. First, feature points on the body are located in depth video frames by combining a feature point detection algorithm with error correction. Second, body dimensions are measured or estimated, and are progressively supplemented and refined as the video stream is processed. Finally, based on this information, the body is modeled part by part with parametric modeling and the parts are assembled into a complete human model. Experimental results show that even with incomplete dimensions the method can reconstruct a complete human body with limb-structure semantics, makes it easy to modify local parts of the model or its posture, and exhibits good real-time performance and interactivity in a virtual dressing application.
3D human reconstruction generally begins by acquiring geometric information captured from multiple viewpoints with the subject restricted to a limited set of postures, so such methods cannot serve applications that need easily modified 3D human models well. To solve this problem, this paper presents a new reconstruction method that needs only one Kinect. First, it locates the feature points of the human body in depth video frames, combining feature point detection with error correction. Second, based on the feature point positions, it performs anthropometric measurement, estimating the size of each body part with a progressive calculation algorithm as the video stream is processed. Finally, it carries out segmented parametric modeling and assembly according to the obtained body part information. Experiments demonstrate that our method can quickly reconstruct a usable 3D human model and modify it conveniently while the person changes posture in front of the Kinect. The resulting flexible models thus achieve good efficiency and interactive effects in the virtual dressing application.
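The progressive measurement step can be pictured as accumulating noisy per-part size estimates across successive depth frames until the body dimensions are complete. The following Python sketch illustrates only this accumulation idea under assumed part names and a hypothetical confidence threshold; it is not the paper's implementation and does not touch the Kinect SDK.

```python
# Minimal sketch of progressive body-size refinement: each frame contributes
# per-part size estimates, which are averaged so dimensions are completed and
# stabilized as the video stream is processed. Part names, units (cm), and the
# confidence threshold are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class ProgressiveBodySizes:
    """Accumulates per-part size estimates (in cm) over successive frames."""
    min_confidence: float = 0.5  # discard low-confidence frame estimates
    _sums: Dict[str, float] = field(default_factory=dict)
    _counts: Dict[str, int] = field(default_factory=dict)

    def update(self, frame_estimates: Dict[str, Tuple[float, float]]) -> None:
        """frame_estimates maps a part name to (size_cm, confidence)."""
        for part, (size_cm, conf) in frame_estimates.items():
            if conf < self.min_confidence:
                continue  # e.g. the part was occluded or poorly tracked
            self._sums[part] = self._sums.get(part, 0.0) + size_cm
            self._counts[part] = self._counts.get(part, 0) + 1

    def size(self, part: str) -> Optional[float]:
        """Current best estimate for a part, or None if not yet observed."""
        n = self._counts.get(part, 0)
        return self._sums[part] / n if n else None

    def missing(self, required_parts: List[str]) -> List[str]:
        """Parts that still lack any reliable measurement."""
        return [p for p in required_parts if self._counts.get(p, 0) == 0]


if __name__ == "__main__":
    sizes = ProgressiveBodySizes()
    # Two simulated frames; the left arm is unreliable in the first one.
    sizes.update({"shoulder_width": (41.0, 0.9), "arm_length_left": (60.0, 0.2)})
    sizes.update({"shoulder_width": (43.0, 0.8), "arm_length_left": (61.5, 0.7)})
    print(sizes.size("shoulder_width"))                     # 42.0
    print(sizes.missing(["shoulder_width", "leg_length"]))  # ['leg_length']
```

In such a scheme, parts reported by `missing()` could be filled in by later frames or by estimation from already measured parts, which matches the abstract's claim that a usable model can be built even when the dimensions are incomplete.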