This paper analyzes complex motion events in video and extends the case grammar theory of natural language processing by adding a time case structure. The extended case frames are used to annotate complex video events, and the annotated data are stored in a data cube. Finally, the MDFP-growth method is applied to mine association rules from the multi-dimensional case concepts. The experiments compare the annotation results of the extended case framework against the original method, and compare the four-dimensional data with the added time case (PRED, Ag, T, Loc) against the original three-dimensional data (PRED, Ag, Loc) in terms of mining run time, the number of rules generated, and the precision and recall of video event detection. The results show that the proposed method annotates complex events more accurately and achieves higher processing efficiency.
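The mining step operates on four-dimensional case-frame records of the form (PRED, Ag, T, Loc). The following is a minimal Python sketch of that idea: it counts frequent itemsets over such records by plain enumeration rather than building the FP-tree that MDFP-growth would use, and all record values shown are hypothetical examples, not data from the paper.

```python
from itertools import combinations
from collections import Counter

# Hypothetical case-frame annotations for video events: each record is a
# four-dimensional set of attribute=value items covering the predicate
# (PRED), agent (Ag), time case (T), and location (Loc). These example
# records are illustrative only.
records = [
    {"PRED=enter", "Ag=person", "T=night", "Loc=gate"},
    {"PRED=enter", "Ag=person", "T=night", "Loc=gate"},
    {"PRED=enter", "Ag=vehicle", "T=day", "Loc=gate"},
    {"PRED=loiter", "Ag=person", "T=night", "Loc=gate"},
]

def frequent_itemsets(records, min_support=2):
    """Return every itemset (of any size) meeting the support threshold.

    MDFP-growth avoids this exhaustive enumeration by compressing the
    records into an FP-tree; enumeration keeps the sketch short and is
    adequate for four-dimensional records.
    """
    counts = Counter()
    for rec in records:
        for k in range(1, len(rec) + 1):
            for combo in combinations(sorted(rec), k):
                counts[combo] += 1
    return {items: c for items, c in counts.items() if c >= min_support}

freq = frequent_itemsets(records)
# ("PRED=enter", "T=night") appears in 2 of the 4 records, so a rule such
# as PRED=enter -> T=night can be scored from these counts.
```

Association rules are then derived from the frequent itemsets by comparing the support of an itemset with the support of its antecedent subset, which yields the rule's confidence.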