To overcome the limitations of existing moving-object detection algorithms in dynamic backgrounds, a detection algorithm based on multiple homography constraints and a Markov random field (MRF) model is proposed. The method builds on long-term motion trajectories tracked across the whole video sequence and detects moving objects in two stages: trajectory-level separation and pixel-level labeling. In the trajectory-separation stage, the background motion of the video sequence is modeled with multiple homography constraints, and a cumulative-confirmation strategy based on these constraints accurately separates the trajectories into background and foreground ones. In the pixel-labeling stage, each frame is over-segmented into superpixels, and a spatio-temporal MRF model is built with the superpixels as nodes, converting the sparse trajectory classification into dense pixel labeling. The trajectory-separation cues and the spatio-temporal neighborhood relations of the superpixels are jointly encoded in the MRF energy function, and minimizing this function yields a foreground/background label for every pixel. Compared with existing methods based on long-term motion trajectories, the proposed algorithm requires no affine-camera assumption, avoids the missing-trajectory-point problem caused by the equal-length constraint on trajectories, and can handle videos from both stationary and moving cameras. Qualitative and quantitative experiments on several publicly available datasets show that the algorithm significantly outperforms existing methods in trajectory-separation accuracy, trajectory-point density, and pixel-labeling accuracy.
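The trajectory-separation stage can be illustrated with a minimal sketch. The fragment below is an assumption-laden simplification, not the paper's implementation: `fit_homography` uses a plain DLT least-squares fit (the paper's multi-frame estimation would typically use robust fitting such as RANSAC), a single homography per frame pair stands in for the multiple homography constraints, and the threshold and confirmation ratio in `separate_trajectories` are illustrative values. The cumulative-confirmation idea is modeled by counting, over a trajectory's lifetime, how many frame-to-frame steps are consistent with the background homography:

```python
import numpy as np

def fit_homography(src, dst):
    # Direct Linear Transform: least-squares 3x3 homography H
    # mapping src -> dst (both N x 2 arrays, N >= 4, non-collinear).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A (last right-singular vector) solves A h = 0.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reproj_error(H, p, q):
    # Euclidean distance between H-projected point p and observed point q.
    ph = H @ np.array([p[0], p[1], 1.0])
    return np.hypot(ph[0] / ph[2] - q[0], ph[1] / ph[2] - q[1])

def separate_trajectories(trajs, Hs, thresh=3.0, ratio=0.8):
    # trajs: list of (F x 2) arrays, one point position per frame.
    # Hs: list of F-1 background homographies, frame t -> frame t+1.
    # A trajectory is confirmed as background only if a large enough
    # fraction of its steps fits the background motion model.
    labels = []
    for T in trajs:
        hits = sum(reproj_error(Hs[t], T[t], T[t + 1]) < thresh
                   for t in range(len(T) - 1))
        labels.append('background' if hits >= ratio * (len(T) - 1)
                      else 'foreground')
    return labels
```

Because the decision accumulates evidence over many frames rather than trusting a single frame pair, a background point briefly occluded or mismatched in one frame is not immediately misclassified as foreground.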
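The pixel-labeling stage minimizes an MRF energy combining a per-superpixel unary term (from the trajectory-separation cue) and a pairwise smoothness term over spatio-temporal neighbors. The sketch below is a toy illustration under stated assumptions: it uses a Potts pairwise model and Iterated Conditional Modes (ICM) as a simple stand-in minimizer (an actual system would more likely use graph cuts), and the function name, edge-weight scheme, and `beta` value are all hypothetical:

```python
import numpy as np

def icm_labeling(unary, edges, beta=1.0, iters=10):
    # unary: N x 2 array; unary[i, l] is the cost of giving superpixel i
    #        label l (0 = background, 1 = foreground), e.g. derived from
    #        the fraction of background trajectory points it contains.
    # edges: list of (i, j, w) spatio-temporal neighbor links with weight w.
    # Minimizes E(L) = sum_i unary[i, L_i]
    #                + beta * sum_(i,j,w) w * [L_i != L_j]   (Potts model)
    # by ICM: repeatedly give each node its locally cheapest label.
    n = len(unary)
    nbrs = [[] for _ in range(n)]
    for i, j, w in edges:
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    labels = np.argmin(unary, axis=1)          # start from the unary optimum
    for _ in range(iters):
        changed = False
        for i in range(n):
            costs = unary[i].copy()
            for j, w in nbrs[i]:
                for l in (0, 1):
                    if l != labels[j]:         # penalize disagreeing labels
                        costs[l] += beta * w
            new = int(np.argmin(costs))
            if new != labels[i]:
                labels[i] = new
                changed = True
        if not changed:                        # converged to a local minimum
            break
    return labels
```

The pairwise term is what turns the sparse trajectory evidence into a dense, spatially coherent labeling: a superpixel with a weak or noisy unary cue inherits the label of its spatio-temporal neighbors when that lowers the total energy.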