Objective With the great success of sparse representation methods in image reconstruction, researchers have proposed a dedicated classification approach, namely sparse representation-based classification (SRC). To strengthen the collaborative representation ability among samples and to relax the strong L1 constraint in sparse decomposition, we propose a face recognition method that combines an iterative elimination mechanism and a greedy search strategy within the sparse classification framework. Method The test sample is represented as a linear combination of the training samples, and the classes and individual samples that contribute little to classification are removed from the full training set by iterative computation. In the coefficient decomposition step, a minimum-error orthogonal matching pursuit (EcOMP) algorithm is adopted, and the class samples with large contributions are then selected for classification. Result By iteratively updating the sample dictionary, the representation ability of the true class is strengthened and the strong L1 constraint on the decomposition coefficients is weakened. In all experiments, the regularization parameter λ is set to 0.001. On the ORL, FERET, and AR face databases, the proposed algorithm achieves recognition rates of 97.88%, 67.95%, and 94.50%, respectively, which verifies its effectiveness. Conclusion The proposed face recognition method, built on an iterative elimination mechanism and a greedy search strategy under the sparse classification framework, updates the sample dictionary through a dynamic iterative process and balances the relationship between collaborative representation and the sparsity constraint; compared with the original sparse classification model, it achieves better accuracy and stability.
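For reference, the abstract does not write out the SRC objective it builds on; a standard formulation, with A the matrix of training samples stacked column-wise, y the test sample, and λ = 0.001 the regularization parameter reported above, is

\[
\hat{x} = \arg\min_{x} \; \|y - Ax\|_2^2 + \lambda \|x\|_1 ,
\qquad
\mathrm{identity}(y) = \arg\min_{i} \; \|y - A\,\delta_i(\hat{x})\|_2 ,
\]

where \(\delta_i(\hat{x})\) keeps only the coefficients associated with class i. The exact variant used in the paper may differ; this is the conventional form assumed here.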
Objective The success of sparse representation in image reconstruction has triggered research on sparse representation-based pattern classification (SRC). A strict and standard dictionary yields sparse coefficients but does not necessarily improve classification accuracy in the SRC model. Recent research reveals that the collaborative representation (CR) mechanism, rather than the L1-norm sparsity constraint, is what improves face recognition (FR) accuracy. Constructing a rational and optimal dictionary for the SRC model is therefore a challenging task. We propose a novel SRC fusion method for face recognition that uses a dynamical class-elimination mechanism, which strengthens the ability of collaborative representation, together with a greedy search (GS) strategy. Method The proposed method involves two aspects: a training-sample selection and classification strategy, and sparse coding coefficient decomposition. The selection and classification strategy aims to represent a query sample as a linear combination of the most informative training samples and to exploit an optimal representation drawn from the classes with the largest relevant contributions. Instead of eliminating several classes at one time, we eliminate classes one by one through the GS sparse coding process until the ideal number of classes is obtained. An important goal of the proposed method is to select a subset of training samples that provides a descriptive representation of the sparse category structure; we develop a heuristic learning strategy to achieve this goal. The method converts the original classification problem into a simpler one that contains a relatively small number of classes, and the remaining training samples are used for classification and produce the best representation of the test sample. The literature validates that the CR mechanism plays a more significant role in the SRC scheme than the sparsity constraint does. However, the sparsity constraint cannot be removed from the SRC model entirely.
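To make the class-elimination idea concrete, the following Python sketch (not the authors' implementation) pairs a plain orthogonal matching pursuit coder with one-by-one removal of the class whose coefficients contribute least; the function names, the coefficient-energy contribution measure, and the stopping count n_keep_classes are illustrative assumptions rather than details given in the abstract.

# Minimal sketch of SRC with one-by-one class elimination and a greedy
# OMP-style coder. Illustrative only; not the authors' EcOMP algorithm.
import numpy as np

def omp(D, y, max_atoms, tol=1e-6):
    """Greedy orthogonal matching pursuit: approximate y with a few columns of D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(max_atoms):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

def src_with_class_elimination(D, labels, y, n_keep_classes=5, max_atoms=20):
    """Drop the least useful class one at a time, then classify by class-wise residual."""
    labels = np.asarray(labels)
    D = D / np.linalg.norm(D, axis=0)          # L2-normalize training samples
    y = y / np.linalg.norm(y)
    classes = list(np.unique(labels))
    while len(classes) > n_keep_classes:
        keep = np.isin(labels, classes)
        x = omp(D[:, keep], y, max_atoms)
        # contribution of each remaining class = energy of its coefficients (assumed measure)
        contrib = {c: np.sum(x[labels[keep] == c] ** 2) for c in classes}
        classes.remove(min(contrib, key=contrib.get))   # eliminate the weakest class
    # final decision: class whose samples give the smallest reconstruction residual
    keep = np.isin(labels, classes)
    x = omp(D[:, keep], y, max_atoms)
    kept_labels = labels[keep]
    Dk = D[:, keep]
    residuals = {c: np.linalg.norm(y - Dk[:, kept_labels == c] @ x[kept_labels == c])
                 for c in classes}
    return min(residuals, key=residuals.get)

A call such as src_with_class_elimination(train_matrix, train_labels, test_vector) would return the predicted class label; the one-class-per-iteration loop mirrors the GS elimination process described above.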