To address the multifocus image fusion problem in extending the depth of focus of optical microscopes, this paper proposes a fusion algorithm based on directional characteristics in the domain of the new contourlet transform with sharp frequency localization (NCT-SFL). The algorithm first decomposes the source images with the NCT-SFL, yielding low- and high-frequency coefficients at different scales and directions. The low-frequency coefficients are fused by arithmetic averaging. High-frequency fusion proceeds in two steps: first, the sum-modified-Laplacian (SML) is computed as a focus measure; then a new elliptical directional window, matched one-to-one with the directional decomposition, is defined, and with its parameters fixed, the SML values are accumulated within the window and used as the criterion for selecting the high-frequency coefficients. The fused image is finally obtained by the inverse NCT-SFL. In the experiments, fusion algorithms are evaluated with a newly defined objective metric, mutual structural information (MSI), which accounts for both the statistical dependence between the fused and source images and the characteristics of human vision. The results show that, for multifocus images, the proposed method improves MSI by 2.94% over a square-window algorithm in the NCT-SFL domain, and by 10.44% and 8.56% over square-window and elliptical-window algorithms in the contourlet domain, respectively. This indicates that the proposed method transfers more of the in-focus information from the source images into the fused image and is an effective means of extending the depth of focus.
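The fusion rules described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the NCT-SFL decomposition has already produced a pair of corresponding subbands, and a plain square window stands in for the paper's elliptical directional window; the function names and the `win_radius` parameter are hypothetical.

```python
import numpy as np

def modified_laplacian(img):
    """Modified Laplacian: ML(i,j) = |2I - I_left - I_right| + |2I - I_up - I_down|."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def window_sum(x, r):
    """Accumulate x over a (2r+1)x(2r+1) neighbourhood.

    A square window for brevity; the paper accumulates over an elliptical
    window oriented to match each directional subband."""
    p = np.pad(x, r, mode="edge")
    h, w = x.shape
    out = np.zeros_like(x)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out

def fuse_highpass(a, b, win_radius=1):
    """Choose-max rule: keep the coefficient whose windowed SML is larger."""
    sml_a = window_sum(modified_laplacian(a), win_radius)
    sml_b = window_sum(modified_laplacian(b), win_radius)
    return np.where(sml_a >= sml_b, a, b)

def fuse_lowpass(a, b):
    """Low-frequency coefficients are fused by arithmetic averaging."""
    return (np.asarray(a, dtype=float) + np.asarray(b, dtype=float)) / 2.0
```

In the full pipeline, `fuse_highpass` would be applied per directional subband and the fused coefficients passed through the inverse NCT-SFL to reconstruct the image.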