This paper applies deep learning to remote sensing image fusion and proposes a fusion method based on Deep Support Value Learning Networks (DSVL Nets), in which a loss function based on structural risk minimization is used to train the deep network. To avoid information loss during fusion, the downsampling step of the feature mapping layers in a conventional convolutional neural network is removed, yielding the DSVL Nets model. The model contains five hidden layers, each composed of a convolution layer and a linear layer; this basic unit provides a multi-scale, multi-directional, anisotropic, non-subsampled redundant transform. After training, the convolution layers and the linear layer of the fifth hidden layer are taken as the output layers of the model. The convolution-layer images are fused with the absolute-maximum rule. The linear-layer images are sparsely represented over an overcomplete dictionary, and the sparse coefficients are fused with the absolute-maximum rule to obtain the fused linear-layer image. Finally, the fused convolution-layer and linear-layer images are reconstructed to produce the result image. Experiments on QuickBird and Geoeye satellite datasets validate the proposed method. Compared with the PCA, AWLP, PN-TSSC, and SVT algorithms, the experimental results show that the proposed method outperforms the comparison algorithms in both subjective visual quality and objective evaluation metrics, and better preserves the spectral and spatial information of the images.
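The absolute-maximum fusion rule described above can be sketched as follows. This is a minimal illustration only, assuming the feature maps (or sparse-coefficient arrays) are NumPy arrays of equal shape; the function name and the toy inputs are hypothetical, not taken from the paper:

```python
import numpy as np

def abs_maximum_fuse(a, b):
    """At each position, keep the coefficient whose absolute value
    is larger (ties go to the first input)."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Toy 2x2 "convolution-layer" responses from two source images
pan_feat = np.array([[0.9, -0.2],
                     [0.1,  0.5]])
ms_feat = np.array([[0.3,  0.7],
                    [-0.8, -0.4]])

fused = abs_maximum_fuse(pan_feat, ms_feat)
# fused == [[0.9, 0.7], [-0.8, 0.5]]
```

In the paper's pipeline the same rule is applied twice: directly to the convolution-layer images, and to the sparse coefficients of the linear-layer images before reconstruction.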