A fusion method for infrared and visible images based on the Non-Subsampled Dual-Tree Complex Contourlet Transform (NSDTCT) and sparse representation was proposed to overcome the shortcomings of traditional wavelet-based image fusion methods. With the proposed method, the source images were first preprocessed with a morphological transform and then decomposed by the NSDTCT to obtain the low-frequency and high-frequency sub-band coefficients. According to the different characteristics of the low- and high-frequency coefficients, an Improved Sparse Representation (ISR) fusion rule was proposed for the low-frequency sub-bands. The improved spatial frequency was then used as the external input of a pulse-coupled neural network, and a fusion strategy based on an adaptive dual-channel pulse-coupled neural network (2APCNN) was presented for the high-frequency sub-bands. Finally, the fused image was obtained by performing the inverse NSDTCT. Experimental results indicate that the proposed method outperforms conventional image fusion methods in terms of both objective evaluation criteria and visual quality. Compared with the conventional NSCT-SR method, four objective quality indexes on the two test image sets, mutual information (MI), amount of edge information (QAB/F), average gradient (AG), and standard deviation (SD), increased by 9.89%, 6.39%, 104.64%, 55.09% and 9.53%, 17.77%, 95.66%, 52.89%, respectively.
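The high-frequency fusion rule feeds the spatial frequency of a local coefficient block into the PCNN as its external stimulus. As a minimal sketch, the commonly used definition of spatial frequency, SF = sqrt(RF² + CF²) with RF and CF the root-mean-square horizontal and vertical first differences, can be written as follows; the paper's "improved" variant may differ in detail, so this is the baseline form only:

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Baseline spatial frequency of an image block:
    SF = sqrt(RF^2 + CF^2), where RF (row frequency) and CF
    (column frequency) are the RMS values of the horizontal
    and vertical first differences of the block."""
    block = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # horizontal differences
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # vertical differences
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

A flat block has SF = 0, and SF grows with local detail, which is why it is a natural activity measure for driving the PCNN's firing on high-frequency coefficients.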
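Three of the four objective indexes cited above have standard definitions that can be computed directly from the images. A minimal sketch, using the forms common in the fusion literature (exact definitions, e.g. the gradient normalization in AG or the histogram bin count for MI, vary between papers; QAB/F needs edge-orientation maps and is omitted here):

```python
import numpy as np

def std_dev(img: np.ndarray) -> float:
    """Standard deviation (SD): a measure of overall image contrast."""
    return float(np.std(img.astype(np.float64)))

def average_gradient(img: np.ndarray) -> float:
    """Average gradient (AG): mean local intensity-change magnitude,
    computed from forward differences on the interior grid."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences, cropped to common shape
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences, cropped to common shape
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 256) -> float:
    """Mutual information (MI) between two images, estimated
    from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                  # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of b
    nz = p_ab > 0                             # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))
```

For fusion evaluation, MI is typically reported as the sum of the mutual information between the fused image and each source image, so higher values mean more source information was transferred.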