From a Bayesian perspective, this paper reveals the intrinsic connections among a class of commonly used sparse recovery algorithms: Sparse Bayesian Learning (SBL) with latent variable models, regularized FOCUSS (R_FOCUSS), and the Log-Sum algorithm. The analysis shows that SBL, as a special case of latent variable Bayesian models that operates in the latent variable space via type-II maximum likelihood (Type II ML), can be viewed as a more general and flexible approach and offers an avenue for improvement when seeking sparse solutions to ill-posed, underdetermined inverse problems. Numerical simulations confirm the superior performance of SBL compared with state-of-the-art sparse methods based on type-I maximum likelihood (Type I ML).
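To make the type-II ML machinery concrete, below is a minimal NumPy sketch of the standard EM updates for SBL under a Gaussian likelihood. It is an illustration under stated assumptions, not the paper's implementation: the function name sbl_em, the fixed noise variance, the iteration count, and the pruning tolerance are all hypothetical choices.

```python
import numpy as np

def sbl_em(Phi, y, noise_var=1e-3, n_iter=200, prune_tol=1e-8):
    """EM-style type-II ML updates for Sparse Bayesian Learning (sketch).

    Model: y = Phi @ x + n,  x ~ N(0, diag(gamma)),  n ~ N(0, noise_var * I).
    The latent hyperparameters gamma are fit by maximizing the marginal
    likelihood p(y | gamma); most gamma_i collapse toward zero, so the
    posterior mean of x is sparse.
    """
    n, m = Phi.shape
    gamma = np.ones(m)                       # initial prior variances
    mu = np.zeros(m)
    for _ in range(n_iter):
        G = np.diag(gamma)
        # Marginal (evidence) covariance of y for the current gamma
        Sigma_y = noise_var * np.eye(n) + Phi @ G @ Phi.T
        # Posterior mean and covariance of x given y and gamma
        PhiG = Phi @ G                       # shape (n, m)
        K = np.linalg.solve(Sigma_y, PhiG)   # Sigma_y^{-1} Phi G
        mu = K.T @ y                         # G Phi^T Sigma_y^{-1} y
        Sigma_x = G - PhiG.T @ K             # G - G Phi^T Sigma_y^{-1} Phi G
        # E-step second moments give the M-step hyperparameter update
        gamma = mu**2 + np.diag(Sigma_x)
        gamma[gamma < prune_tol] = 0.0       # prune negligible components
    return mu, gamma

# Illustrative use: recover a 5-sparse vector from 40 noisy measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
y = Phi @ x_true + 0.01 * rng.standard_normal(40)
x_hat, gamma = sbl_em(Phi, y, noise_var=1e-4)
```

The EM form above is chosen for clarity; in practice the fixed-point (MacKay-style) gamma update is often used instead because it tends to converge in fewer iterations, though both maximize the same marginal likelihood.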