A new adaptive PCNN image fusion method that incorporates the characteristics of the human visual system is proposed. The local contrast of each pixel is used as the linking strength of the corresponding PCNN neuron. After the PCNN firing process, a fire mapping image is obtained for each source image, and a compare-selection operator then picks the salient features of each source image to generate the fused image. Apart from a few main parameters, the remaining parameters, such as the threshold adjusting constant, have little effect on the fusion result, which alleviates the difficulty of tuning the many parameters of PCNN in image processing. Experimental results show that the fusion quality is better than that of the classical wavelet transform and Laplacian pyramid methods.
This paper proposes a new fusion algorithm based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human visual system. In the traditional algorithm, the linking strength of every neuron is set to the same experimentally chosen value; in contrast, this algorithm uses the local contrast of each pixel as the linking strength of its neuron, so the linking strength is chosen adaptively. After processing by the PCNN with the adaptive linking strength, a new fire mapping image is obtained for each image taking part in the fusion. The clear regions of each original image are decided by a compare-selection operator applied to the fire mapping images pixel by pixel, and all of them are then merged into a new clear image. Furthermore, with this algorithm the remaining parameters, for example the threshold adjusting constant, have only a slight effect on the fused image; it therefore overcomes the difficulty of adjusting parameters in PCNN. Experimental results indicate that the method outperforms the traditional approaches in preserving edge information while improving texture information.
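The sketch below illustrates the overall fusion flow described above: compute a per-pixel local contrast, use it as the adaptive linking strength in a simplified PCNN, accumulate fire mapping images, and fuse by a pixel-wise compare-selection rule. It is a minimal sketch, not the paper's exact implementation: the specific PCNN variant, the linking kernel, the decay and amplitude constants, the number of iterations, and the local-contrast measure (local standard deviation divided by local mean) are all illustrative assumptions, and the function names are hypothetical.

```python
# Minimal sketch of adaptive-PCNN fusion; inputs are assumed to be
# grayscale float images normalized to [0, 1] and of equal size.
import numpy as np
from scipy.ndimage import convolve, uniform_filter


def local_contrast(img, size=3):
    """Per-pixel local contrast used as the adaptive linking strength (assumed measure)."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return std / (mean + 1e-6)


def pcnn_fire_map(img, beta, iterations=30, alpha_theta=0.2, v_theta=20.0,
                  alpha_l=1.0, v_l=1.0):
    """Run a simplified PCNN and return how many times each neuron fired."""
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # linking kernel (assumed weights)
    F = img                                   # feeding input = external stimulus
    L = np.zeros_like(img)                    # linking input
    Y = np.zeros_like(img)                    # pulse output
    theta = np.ones_like(img)                 # dynamic threshold
    fire_count = np.zeros_like(img)
    for _ in range(iterations):
        L = np.exp(-alpha_l) * L + v_l * convolve(Y, w, mode="nearest")
        U = F * (1.0 + beta * L)              # beta is the per-pixel linking strength
        Y = (U > theta).astype(img.dtype)     # neurons fire when activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_count += Y                       # accumulate the fire mapping image
    return fire_count


def fuse(img_a, img_b, **pcnn_kwargs):
    """Compare-selection: take each pixel from the image whose neuron fired more often."""
    map_a = pcnn_fire_map(img_a, local_contrast(img_a), **pcnn_kwargs)
    map_b = pcnn_fire_map(img_b, local_contrast(img_b), **pcnn_kwargs)
    return np.where(map_a >= map_b, img_a, img_b)
```

Under these assumptions, pixels with higher local contrast couple more strongly to their firing neighbors and therefore fire earlier and more often, so the compare-selection step tends to keep the sharper (in-focus or feature-rich) regions of each source image.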