This paper presents a GPU-accelerated adaptive coupled variational model. First, following Meyer's energy-minimization decomposition theory, the model decomposes a given image into the sum of two components, geometric structure and oscillating pattern, and an adaptive diffusion-tensor model is applied to the structure part for edge enhancement. Second, a nonlocal means filter removes noise from the oscillating part while preserving image edges and fine details. Then the general-purpose computing capability of the graphics processing unit (GPU) is exploited to process the coupled variational model in parallel, and a strategy combining coalesced memory access with shared memory greatly improves the efficiency of the coupled variational algorithm. Experimental results show that the proposed model preserves image edge information and enhances texture features well, and that GPU processing is at least 15 times faster than serial CPU processing.
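As a rough illustration of the GPU strategy the abstract describes, the following CUDA sketch applies the combined coalesced-access and shared-memory pattern to one explicit diffusion step on an image. The kernel name, tile size, and the scalar Perona-Malik-style diffusivity are illustrative assumptions, not the paper's implementation; the actual model uses an adaptive diffusion tensor inside the coupled variational framework.

```cuda
// Minimal sketch (assumed names and parameters, not the paper's code) of a
// shared-memory, coalesced-access stencil kernel for one explicit
// diffusion step on a single-channel float image.

#include <cuda_runtime.h>

#define TILE 16  // assumed tile width; a row of threads reads coalesced

__global__ void diffusion_step(const float* in, float* out,
                               int width, int height, float dt)
{
    // Tile with a one-pixel halo for the 4-neighbour stencil.
    __shared__ float tile[TILE + 2][TILE + 2];

    int x  = blockIdx.x * TILE + threadIdx.x;
    int y  = blockIdx.y * TILE + threadIdx.y;
    int tx = threadIdx.x + 1;
    int ty = threadIdx.y + 1;

    // Clamp coordinates so out-of-range threads still load safely;
    // this keeps every thread alive for __syncthreads().
    int cx = min(x, width  - 1);
    int cy = min(y, height - 1);

    // Coalesced load: consecutive threads in a row touch consecutive
    // global addresses.
    tile[ty][tx] = in[cy * width + cx];

    // Halo cells, clamped at the image border.
    if (threadIdx.x == 0)        tile[ty][0]        = in[cy * width + max(cx - 1, 0)];
    if (threadIdx.x == TILE - 1) tile[ty][TILE + 1] = in[cy * width + min(cx + 1, width - 1)];
    if (threadIdx.y == 0)        tile[0][tx]        = in[max(cy - 1, 0) * width + cx];
    if (threadIdx.y == TILE - 1) tile[TILE + 1][tx] = in[min(cy + 1, height - 1) * width + cx];
    __syncthreads();

    if (x >= width || y >= height) return;

    // Scalar Perona-Malik-style diffusivity g = 1 / (1 + |grad u|^2 / k^2),
    // standing in here for the adaptive diffusion tensor of the actual model.
    float c  = tile[ty][tx];
    float gx = 0.5f * (tile[ty][tx + 1] - tile[ty][tx - 1]);
    float gy = 0.5f * (tile[ty + 1][tx] - tile[ty - 1][tx]);
    float g  = 1.0f / (1.0f + (gx * gx + gy * gy) / 0.01f);

    float lap = tile[ty][tx + 1] + tile[ty][tx - 1]
              + tile[ty + 1][tx] + tile[ty - 1][tx] - 4.0f * c;
    out[y * width + x] = c + dt * g * lap;  // one explicit update step
}

// Assumed launch configuration:
//   dim3 block(TILE, TILE);
//   dim3 grid((width + TILE - 1) / TILE, (height + TILE - 1) / TILE);
//   diffusion_step<<<grid, block>>>(d_in, d_out, width, height, 0.2f);
```

Loading each tile into shared memory once lets the four neighbour reads per pixel hit on-chip memory instead of issuing redundant global loads; together with the coalesced row-wise loads, this is the kind of combined strategy the abstract credits for the reported speedup.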