Current lossless compression methods for human motion data typically place body poses in a sequentially dependent prediction space, so that retrieving a given pose requires first processing all poses that precede it in that space, which increases decompression time and memory usage. To address this problem, we propose a lossless compression method for human pose data with predictor-level random-access prediction. The method takes a well-organized set of human poses as input. First, a two-step clustering procedure hierarchically groups the motions and poses, gathering similar poses into a single prediction space. Next, a parametrized mean predictor predicts the current pose within the clustered pose set. Finally, an entropy-coding algorithm compresses the residual between the predicted and actual values, producing the compact compressed data. Experimental results show that the proposed method outperforms traditional methods in decompression time and compression ratio, and that it has broad application prospects in character animation, virtual reality, and other applications that require real-time access to accurate motion data.
In current lossless compression methods, poses are usually encoded in a highly correlated space: before the current pose can be decompressed, all poses related to it must be processed first, which increases decompression time and memory usage. In this paper, we propose a lossless pose compression method based on a random-access predictor. Our method compresses well-organized pose sets. First, the motion database is pre-processed with a two-step clustering procedure, after which similar poses are gathered into one prediction space, ready for prediction and encoding. Second, a specially designed mean predictor with quantized parameters predicts each pose independently of all other poses. Finally, entropy encoding compresses the difference between the predicted and actual values. Compared with previous lossless compression methods, our approach achieves a higher compression ratio and shorter decompression time. The proposed method can be widely used in character animation and virtual reality, where users typically demand high-quality motion in real time.
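The predict-then-encode pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes poses are vectors of joint angles quantized to integers, uses a plain (unparametrized) cluster mean as the predictor, and uses zlib as a stand-in entropy coder. Coding each pose's residuals independently is what makes random access possible: any pose is recovered from the cluster mean plus its own residuals, without touching other poses.

```python
import struct
import zlib


def compress_poses(poses, scale=1000):
    """Compress one cluster of similar poses losslessly.

    Each pose is a list of floats (e.g. joint angles). Values are
    quantized to integers (step 1/scale), predicted by the rounded
    cluster mean, and the per-pose residuals are entropy-coded.
    zlib is a stand-in for the paper's entropy coder (assumption).
    """
    n, dim = len(poses), len(poses[0])
    # Quantize every pose to integers.
    q = [[round(v * scale) for v in p] for p in poses]
    # Mean predictor: one integer prediction per joint for the cluster.
    mean = [round(sum(p[j] for p in q) / n) for j in range(dim)]
    # Encode each pose's residuals independently -> random access.
    blobs = [
        zlib.compress(struct.pack(f"<{dim}i",
                                  *[p[j] - mean[j] for j in range(dim)]))
        for p in q
    ]
    return mean, blobs


def decode_pose(mean, blob, scale=1000):
    """Decode a single pose from the cluster mean and its residuals."""
    res = struct.unpack(f"<{len(mean)}i", zlib.decompress(blob))
    return [(m + r) / scale for m, r in zip(mean, res)]
```

Decoding is exact with respect to the quantized representation, so the scheme is lossless once the quantization step is fixed; a real coder would replace zlib with a context-based entropy code over the residual distribution.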