To let users see how they would look wearing the clothes shown on a model, a virtual try-on method is proposed that works from photos of a fashion model and of the user. The method comprises three steps: skin color substitution, head substitution, and body shape adjustment. First, a mapping between the average skin colors of the model photo and the user photo is constructed, and the skin pixels of the model photo are recolored through this mapping. Second, a rough face boundary is detected with the Active Shape Model (ASM) and then refined by dynamic programming on the image gray-level gradient; the hair region is obtained by diffusing outward from initial hair pixels according to a trained statistical model of hair color, so that the complete head can be extracted. Finally, based on the model's pose captured by Kinect, feature rings on a 3D human body model are projected onto the model photo to form body shape control points; these control points are used to deform the model's body in the photo so that it matches the user's. Extensive experiments demonstrate that the method is practical.
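The skin color substitution step could be sketched as follows. The abstract only states that a mapping between the two average skin colors is constructed, not its exact form, so a simple per-channel mean/standard-deviation transfer is assumed here; the function name, arguments, and mask representation are illustrative, not the paper's actual interface.

```python
import numpy as np

def substitute_skin_color(model_img, model_mask, user_img, user_mask):
    """Recolor the skin pixels of the model photo toward the user's skin tone.

    Assumption: a per-RGB-channel mean/std transfer between the two skin
    regions; the paper specifies only a mapping between average skin colors.
    model_img, user_img: uint8 arrays of shape (H, W, 3)
    model_mask, user_mask: boolean arrays of shape (H, W) marking skin pixels
    """
    model_skin = model_img[model_mask].astype(np.float64)
    user_skin = user_img[user_mask].astype(np.float64)

    # Per-channel statistics of the two skin regions (epsilon avoids /0).
    m_mean, m_std = model_skin.mean(axis=0), model_skin.std(axis=0) + 1e-6
    u_mean, u_std = user_skin.mean(axis=0), user_skin.std(axis=0)

    # Normalize model skin pixels, then rescale to the user's statistics.
    out = model_img.astype(np.float64).copy()
    out[model_mask] = (model_skin - m_mean) / m_std * u_std + u_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice the skin masks would come from a skin detector or the ASM face region; here they are simply passed in, and non-skin pixels are left untouched.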