The Co-Training algorithm requires two feature views that satisfy both compatibility and independence; however, in many applications there is no natural split of the features into two views meeting this assumption. To address this, we propose evaluating the mutual independence between features with mutual information (MI) or the CHI statistic, and building a feature mutual-independence model (MID-Model). Based on this model, two new feature-subset partition methods, the PMID-MI and PMID-CHI algorithms, are proposed, which can effectively split a feature set into two subsets with strong mutual independence. Several diversity measures are then used to further verify the independence of the two subsets. Diversity between the base classifiers reduces the chance that both classifiers assign the same wrong label to an unlabeled document. Finally, SC-PMID, an improved Co-Training algorithm, is proposed. Experimental results show that SC-PMID significantly improves semi-supervised classification accuracy.
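The paper does not reproduce the PMID-MI procedure here, but the idea of splitting a feature set into two maximally independent subsets can be sketched with plain (unconditional) mutual information and a greedy assignment. `mutual_information`, `greedy_partition`, and the seeding heuristic below are illustrative assumptions, not the authors' exact algorithm:

```python
import math

def mutual_information(x, y):
    """Estimate mutual information (in nats) between two discrete feature columns."""
    n = len(x)
    px, py, pxy = {}, {}, {}
    for a, b in zip(x, y):
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

def greedy_partition(columns):
    """Greedily split features (a dict of name -> value column, at least two
    features) into two subsets. Each feature joins the subset it shares MORE
    mutual information with, so that cross-subset dependence stays low."""
    names = list(columns)
    first = names[0]
    # Seed the second subset with the feature least dependent on the first.
    second = min(names[1:], key=lambda g: mutual_information(columns[first], columns[g]))
    s1, s2 = [first], [second]
    for f in names[1:]:
        if f == second:
            continue
        dep1 = sum(mutual_information(columns[f], columns[g]) for g in s1)
        dep2 = sum(mutual_information(columns[f], columns[g]) for g in s2)
        (s1 if dep1 >= dep2 else s2).append(f)
    return s1, s2
```

For example, with two perfectly correlated pairs of binary features, the greedy pass keeps each correlated pair together, so the two resulting subsets are independent of each other. The paper's PMID-MI additionally conditions the mutual information on the class label, which this sketch omits.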
The Co-Training algorithm is constrained by the assumption that the feature set can be split into two subsets that are both compatible and independent. However, this assumption is usually violated to some degree in real-world applications. The authors propose two methods for evaluating the mutual independence between features, using conditional mutual information or conditional CHI statistics, and present a method for constructing a mutual independence model (MID-Model) over the initial feature set. Based on the MID-Model, two novel feature-partition algorithms, PMID-MI and PMID-CHI, are developed: the former uses conditional mutual information to evaluate the mutual independence between two features, while the latter uses conditional CHI statistics. As a result, a feature set can be divided into two conditionally independent subsets using PMID-MI or PMID-CHI, both of which outperform random splitting. In addition, the conditional independence between the two subsets is verified with several diversity measures, including the Q statistic, the correlation coefficient ρ, disagreement, double fault, and the integrated measure DM. Then, combining the MID-Model with the diversity measures, an improved semi-supervised categorization algorithm named SC-PMID is developed, in which two classifiers are co-trained on a pair of independent feature subsets. The independence of the two subsets reduces the chance of both classifiers agreeing on an erroneous label for an unlabeled example. Experimental results show that SC-PMID can significantly improve semi-supervised categorization precision.
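The pairwise diversity measures named above have standard definitions in terms of the four agreement counts between two classifiers: N11 (both correct), N00 (both wrong), N10 and N01 (exactly one correct). A minimal sketch of the Q statistic, disagreement, and double-fault measures follows; the function name and return format are illustrative choices, not the paper's interface:

```python
def diversity_measures(pred_a, pred_b, truth):
    """Pairwise diversity between two classifiers' predictions on the same
    labeled examples. Returns the Q statistic, disagreement rate, and
    double-fault rate derived from the four agreement counts."""
    n11 = n00 = n10 = n01 = 0
    for a, b, t in zip(pred_a, pred_b, truth):
        a_correct, b_correct = (a == t), (b == t)
        if a_correct and b_correct:
            n11 += 1
        elif a_correct:
            n10 += 1
        elif b_correct:
            n01 += 1
        else:
            n00 += 1
    n = n11 + n00 + n10 + n01
    denom = n11 * n00 + n01 * n10
    return {
        # Q in [-1, 1]; values near 0 indicate statistically independent errors.
        "Q": (n11 * n00 - n01 * n10) / denom if denom else 0.0,
        # Fraction of examples where exactly one classifier is correct.
        "disagreement": (n01 + n10) / n,
        # Fraction of examples both classifiers get wrong (what Co-Training
        # most wants to keep small).
        "double_fault": n00 / n,
    }
```

A low double-fault rate is the property the abstract appeals to: independent subsets make it unlikely that both base classifiers mislabel the same unlabeled example, so mutually confirmed labels are more trustworthy.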