Given multiple sensor images of the same target scene, this paper develops a multiscale image fusion algorithm based on a probabilistic model. The basic idea is as follows: first, each sensor image is decomposed by the wavelet packet multiscale transform to build a pyramid of subimages for that sensor, and a pixel-based probability model is established for each subimage at every scale; then, at each corresponding pixel location on every scale, the parameters of the multiscale probability model are estimated from the corresponding pixel values of the different sensor images using the least squares rule; next, the pixel value at that location is fused according to the Bayes rule; finally, the fusion estimate of the target scene from the multiple sensor images is obtained via the inverse wavelet packet multiscale transform. Computer simulation experiments on visible-light and infrared sensor images show that the new algorithm is more effective than related methods.
When multiple images of the same object scene are obtained by different sensors, a new multiscale image fusion algorithm based on a probabilistic model is proposed. Its basic idea is as follows: first, decompose each sensor image into multiple subimages that form a multiscale pyramid via the wavelet packet transform, and establish a pixel-based subimage model on every level of the pyramid. Second, estimate the model parameters using the least squares method from the corresponding pixels on each level of the sensor images. Then, based on the model, derive an estimate of the true scene using the maximum a posteriori method. Finally, a global fusion estimate of the object scene is obtained by applying the inverse wavelet packet transform to the local fusion estimates from every level of the pyramid. Results of fusing a visible image and an infrared image show that the proposed algorithm is valid.
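The pipeline described above (decompose each sensor image into subbands, fuse corresponding coefficients under a probabilistic model, then invert the transform) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a single-level 2-D Haar wavelet packet in place of the paper's full multiscale wavelet packet decomposition, and replaces the least-squares parameter estimation with hypothetical, user-supplied per-sensor noise variances `var1`/`var2`; under a Gaussian noise model, the Bayesian (inverse-variance weighted) coefficient fusion then has a closed form.

```python
import numpy as np

def haar_packet_2d(img):
    """One-level 2-D Haar wavelet packet decomposition.
    Splits the image into four subimages (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def inverse_haar_packet_2d(ll, lh, hl, hh):
    """Exact inverse of haar_packet_2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse(img1, img2, var1=1.0, var2=1.0):
    """Fuse two registered sensor images of the same scene.
    Each subband coefficient is combined by the inverse-variance
    weighted (Bayesian linear) estimate; var1/var2 are hypothetical
    per-sensor noise variances standing in for the least-squares
    parameter estimation described in the paper."""
    subs1 = haar_packet_2d(img1)
    subs2 = haar_packet_2d(img2)
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    w2 = 1.0 - w1
    fused = [w1 * s1 + w2 * s2 for s1, s2 in zip(subs1, subs2)]
    return inverse_haar_packet_2d(*fused)
```

Because the transform and the fusion rule are both linear, feeding in two identical images returns the image unchanged, and the less noisy sensor (smaller variance) dominates the fused coefficients, which is the behavior the Bayesian model is meant to capture.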