Calculating minimal unsatisfiability-preserving sub-terminologies (Mups) plays a crucial role in debugging incoherent terminologies. On the basis of a terminology dependency graph model, and from the perspective of dependences between concepts, we define three metrics, namely semantic dependence, semantic cluster, and dependence distribution, to reflect the structural complexity of a terminology. By studying how the number of unsatisfiable concepts, the cardinality of the conflict axiom set, and the cardinality of conflict axioms influence the difficulty of calculating Mups, we define two further metrics, the maximum cardinality of the conflict axiom set and the maximum cardinality of conflict axioms, to reflect the data complexity of an incoherent terminology. Based on these complexity metrics, we design a benchmark for the Mups problem (Mups Benchmark, MupsBen) that generates incoherent terminologies to evaluate the performance of five reasoners, Pellet, HermiT, FaCT++, JFact, and TrOWL, in calculating Mups with a black-box method. Evaluation experiments show that the proposed metrics effectively reflect the data complexity of the benchmark data for calculating Mups. For a given reasoner, performance decreases as the structural complexity of the test data increases; across reasoners, differences in internal inference mechanisms and optimization strategies lead to distinct performance differences under different complexity metrics.
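To illustrate the black-box method referred to above: a black-box algorithm computes a Mups by calling the reasoner only as an oracle for concept satisfiability, typically shrinking the terminology axiom by axiom while unsatisfiability is preserved. The following is a minimal sketch of that shrink step; the expand phase and the `toy_is_unsat` closure test are illustrative assumptions, not the paper's implementation (the evaluated reasoners perform full description-logic reasoning instead).

```python
def shrink(concept, axioms, is_unsat):
    """One black-box shrink pass: try removing each axiom and keep the
    removal only if `concept` stays unsatisfiable. The result is one
    Mups (minimal unsatisfiability-preserving sub-terminology)."""
    mups = list(axioms)
    for ax in list(axioms):
        candidate = [a for a in mups if a != ax]
        if is_unsat(concept, candidate):
            mups = candidate
    return mups

def toy_is_unsat(concept, axioms):
    """Toy oracle (assumption, for illustration only): axioms are atomic
    subsumptions (sub, sup); a concept is unsatisfiable if its superclass
    closure contains some atom X together with its complement '~X'."""
    closure = {concept}
    changed = True
    while changed:
        changed = False
        for sub, sup in axioms:
            if sub in closure and sup not in closure:
                closure.add(sup)
                changed = True
    return any(("~" + c) in closure for c in closure if not c.startswith("~"))

terminology = [
    ("A", "B"),   # A ⊑ B
    ("B", "C"),   # B ⊑ C
    ("A", "~C"),  # A ⊑ ¬C, so A is unsatisfiable
    ("D", "E"),   # irrelevant axiom, pruned by the shrink pass
]
print(shrink("A", terminology, toy_is_unsat))
# → [('A', 'B'), ('B', 'C'), ('A', '~C')]
```

Each removal attempt costs one satisfiability check, which is why the conflict-axiom-set cardinality metrics defined above directly govern how hard a benchmark instance is for this style of algorithm.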