To address the multi-focus image fusion problem, a Curvelet-domain visible-light image fusion method based on grey relational similarity is proposed. The method first applies a multilevel Curvelet decomposition to the source images. The high-frequency coefficients of each image are then partitioned into blocks, the grey Euclidean relational degree from grey theory is used to measure the similarity between corresponding blocks, and different fusion strategies for the high-frequency coefficients are applied accordingly, while the low-frequency coefficients are fused by arithmetic averaging. Finally, the fused image is reconstructed via the inverse Curvelet transform. Experimental results show that the fused images produced by this method outperform several common fusion methods, including pyramid-based and wavelet-transform-based methods, on metrics such as entropy, standard deviation, and clarity.
Focusing on multi-focus image fusion, this paper presents a Curvelet-domain fusion method for visible-light images based on the grey relation of similarity. In this method, the source images are first decomposed by a multilevel discrete Curvelet transform. The high-frequency coefficients are then divided into small blocks, and the grey Euclidean relational degree from grey theory is used to compute the similarity between corresponding blocks, on which the fusion strategy for the high-frequency coefficients is based, while the arithmetic mean is used to fuse the low-frequency coefficients. Finally, the fused image is reconstructed from the fused coefficients by performing the inverse Curvelet transform. Experimental results show that the proposed method is superior to pyramid-based and wavelet-transform-based methods in terms of entropy, standard deviation, and clarity.
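To make the block-wise high-frequency fusion step more concrete, the sketch below (in Python with NumPy) illustrates one way it could be implemented. It is a minimal sketch under several assumptions not stated in the abstract: the grey relational coefficients follow Deng's standard formulation with a root-mean-square ("Euclidean") aggregation, the block size and similarity threshold are arbitrary illustrative values, dissimilar blocks are resolved by an energy-based activity rule, and all function names are hypothetical. The Curvelet decomposition and reconstruction themselves are omitted; the functions simply take one high-frequency sub-band from each source image as a 2-D array. The paper's exact definitions and fusion rules may differ.

```python
import numpy as np

def grey_relational_coefficients(ref, cmp, rho=0.5):
    """Deng's grey relational coefficients between a reference block and a comparison block."""
    delta = np.abs(ref.ravel() - cmp.ravel())
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0:                       # identical blocks: maximal relation everywhere
        return np.ones_like(delta)
    return (d_min + rho * d_max) / (delta + rho * d_max)

def grey_euclidean_relational_degree(ref, cmp, rho=0.5):
    """Aggregate the coefficients with a Euclidean (RMS-type) formula; one common
    formulation of the grey Euclidean relational degree, assumed here for illustration."""
    xi = grey_relational_coefficients(ref, cmp, rho)
    return 1.0 - np.sqrt(np.mean((1.0 - xi) ** 2))

def fuse_highfreq_blocks(hf_a, hf_b, block=8, threshold=0.85, rho=0.5):
    """Block-wise fusion of one high-frequency sub-band from two source images.

    Highly similar block pairs (degree >= threshold) are averaged; dissimilar pairs
    are taken from the source whose block has higher energy. Both the threshold and
    the energy rule are illustrative assumptions, not the paper's exact strategy.
    """
    fused = np.empty_like(hf_a)
    rows, cols = hf_a.shape
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            a = hf_a[r:r + block, c:c + block]
            b = hf_b[r:r + block, c:c + block]
            degree = grey_euclidean_relational_degree(a, b, rho)
            if degree >= threshold:      # blocks agree: average them
                fused[r:r + block, c:c + block] = 0.5 * (a + b)
            else:                        # blocks differ: keep the more active one
                fused[r:r + block, c:c + block] = a if np.sum(a**2) >= np.sum(b**2) else b
    return fused
```

In a full pipeline, this routine would be applied to each high-frequency Curvelet sub-band of the two source images, the low-frequency sub-band would be averaged, and the fused coefficients would then be passed to the inverse Curvelet transform (e.g., via a CurveLab-based toolbox) to reconstruct the fused image.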