For video surveillance of multi-section traffic scenes, existing processing methods can neither obtain video of the boundary regions between sections nor directly produce the video of an arbitrarily specified section. To address this, a new video-compression method is proposed that, on top of a Sprite background, derives the video of any section from the vehicle objects extracted from the video captured for each fixed section. A sectional Sprite background image is generated from the captured video, then compressed and stored in advance; section by section, background subtraction extracts the vehicle objects in the corresponding region together with the anchor points of their minimum bounding rectangles, and equivalent-reuse links between instances of the same object are derived through object motion analysis; in a unified coordinate system, the video of an arbitrary region can then be robustly reconstructed by sliding a window to select the background Sprite and compute the positions of the corresponding vehicle objects. The method offers a high compression ratio and strong visual realism. Experiments show that the PSNR difference of equivalently reused objects is only about 3.83 dB, that the textural MAD within a VOP relative to the corresponding actual frame reaches 30.37 dB or even higher, and that the visual quality is good, with subjective scores of 4 or above, demonstrating the effectiveness of the proposed video-compression method.
Existing processing techniques for multi-section traffic surveillance scenes are able neither to generate video of the boundary regions between sections nor to derive directly the video of an arbitrarily specified section within these regions. A novel object-based approach is proposed that reconstructs the video of an arbitrary section from the video clips captured for each fixed section. The background Sprite is generated from the captured video, then compressed and stored in advance. The vehicle objects and the anchor points of their minimum bounding rectangles are extracted by background subtraction, and their equivalent-reuse links are derived from motion analysis. In a unified coordinate system, the video of an arbitrary section can be robustly reconstructed by sliding a window to select the relevant region of the Sprite and compute the positions of the objects. With this approach, the compression ratio is high and the reconstructed video is photo-realistic. Experimental results show that the PSNR difference of the reused objects is only about 3.83 dB and that the textural MAD within a VOP, relative to its original frame, is 30.37 dB or higher, yielding satisfactory visual quality with subjective scores of 4 or above, which substantiates the effectiveness of this approach.
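The object-extraction step can be illustrated with a minimal sketch. The snippet below is an assumption about how such a step might look rather than the authors' implementation: it uses OpenCV to subtract the section's Sprite background from a frame and to recover the minimum bounding rectangle (and its top-left anchor point) of each detected vehicle object. The threshold and minimum-area values are hypothetical parameters.

```python
import cv2


def extract_vehicle_objects(frame, sprite_background, thresh=30, min_area=400):
    """Background subtraction against the section's Sprite background.

    Returns a list of (patch, (x, y)) pairs, where (x, y) is the top-left
    anchor point of the object's minimum bounding rectangle in frame
    coordinates. `thresh` and `min_area` are illustrative values only.
    """
    # Absolute difference between the current frame and the Sprite background
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(sprite_background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    # Group foreground pixels into candidate vehicle objects
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    objects = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # discard small noise blobs
        x, y, w, h = cv2.boundingRect(c)  # minimum axis-aligned bounding rectangle
        objects.append((frame[y:y + h, x:x + w].copy(), (x, y)))
    return objects
```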
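The sliding-window reconstruction can likewise be sketched, again as an assumption about how the unified coordinate system might be handled: a window covering the requested section is cut out of the global Sprite, and each stored object patch is pasted at its anchor point translated into window coordinates. The name `render_section` and the tuple layout of `objects` are hypothetical.

```python
import numpy as np


def render_section(sprite_panorama, objects, win_x, win_w):
    """Reconstruct one frame of an arbitrarily chosen section.

    sprite_panorama : H x W x 3 global Sprite background (unified coordinates)
    objects         : list of (patch, (gx, gy)) with anchor points (gx, gy)
                      expressed in the unified coordinate system
    win_x, win_w    : horizontal offset and width of the sliding window
    """
    # Select the background of the requested section from the Sprite
    frame = sprite_panorama[:, win_x:win_x + win_w].copy()

    # Paste every visible vehicle object at its window-relative position
    for patch, (gx, gy) in objects:
        x = gx - win_x
        h, w = patch.shape[:2]
        if x + w <= 0 or x >= frame.shape[1]:
            continue  # object lies entirely outside the current window
        # Clip the patch horizontally to the window boundary
        x0, x1 = max(x, 0), min(x + w, frame.shape[1])
        frame[gy:gy + h, x0:x1] = patch[:, x0 - x:x1 - x]

    return frame
```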