Traditional finite-difference modeling of seismic wavefields is time-consuming, since realistic applications can involve more than one billion grid points. We introduce GPU parallel computing into the simulation of seismic wave propagation and present an implementation, based on a staggered-grid finite-difference scheme for the first-order stress-velocity acoustic wave equation with a PML absorbing boundary, that accelerates wavefield modeling on the GPU. A blocking strategy decomposes the geological model into small subblocks, so that the limited capacity of shared memory is no longer a constraint; each thread block computes the wavefield values on the grid points of its own subblock. Fast on-chip memories, namely constant memory, per-block shared memory, and registers, are used to reduce the number of accesses to global memory. For the absorbing boundary, the PML attenuation factor is set to 0 inside the computational region, so the PML layers and the interior can be updated as a single uniform region. Timings of single-shot wavefield modeling on a single CPU and on a GPU/CPU platform, for grids of different sizes, show that the GPU version is several times faster than the CPU version, and the speedup becomes more pronounced as the number of grid points and shots grows. GPU computing is therefore a practical approach to large-scale simulation of seismic wave propagation.
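To make the blocking strategy concrete, the following is a minimal CUDA sketch, not the paper's code, of how one thread block might update the pressure field on its subblock of a 2D staggered grid. The kernel name update_pressure, the 16x16 tile size, the second-order stencil, and the assumption that the grid dimensions are multiples of the block size are all illustrative choices.

```cuda
#include <cuda_runtime.h>

#define BLOCK_X 16   // subblock width handled by one thread block (assumed)
#define BLOCK_Z 16   // subblock height handled by one thread block (assumed)

// Scaling factors shared by all threads, kept in constant memory as the
// paper does for such read-only quantities.
__constant__ float c_dt_dx;   // dt / dx
__constant__ float c_dt_dz;   // dt / dz

// One thread block updates the pressure on one subblock of the model.
// Assumes nx, nz are multiples of BLOCK_X, BLOCK_Z (pad the model if not);
// vx[iz*nx+ix] sits at (ix+1/2, iz) and vz[iz*nx+ix] at (ix, iz+1/2).
__global__ void update_pressure(float *p, const float *vx, const float *vz,
                                const float *rho_c2, int nx, int nz)
{
    // Shared-memory tiles with a one-point halo on the side the
    // second-order staggered stencil reaches across.
    __shared__ float s_vx[BLOCK_Z][BLOCK_X + 1];
    __shared__ float s_vz[BLOCK_Z + 1][BLOCK_X];

    const int tx  = threadIdx.x, tz = threadIdx.y;
    const int ix  = blockIdx.x * BLOCK_X + tx;
    const int iz  = blockIdx.y * BLOCK_Z + tz;
    const int idx = iz * nx + ix;

    // Each thread stages its own velocity sample; edge threads also stage
    // the halo column/row that belongs to the neighbouring subblock.
    s_vx[tz][tx + 1] = vx[idx];
    s_vz[tz + 1][tx] = vz[idx];
    if (tx == 0) s_vx[tz][0] = (ix > 0) ? vx[idx - 1]  : 0.0f;
    if (tz == 0) s_vz[0][tx] = (iz > 0) ? vz[idx - nx] : 0.0f;
    __syncthreads();

    // dp/dt = -rho*c^2 * (dvx/dx + dvz/dz), second order in space;
    // every read below hits shared memory rather than global memory.
    const float dvx = s_vx[tz][tx + 1] - s_vx[tz][tx];
    const float dvz = s_vz[tz + 1][tx] - s_vz[tz][tx];
    p[idx] -= rho_c2[idx] * (c_dt_dx * dvx + c_dt_dz * dvz);
}
```

The kernel would be launched with a grid of (nx/BLOCK_X) x (nz/BLOCK_Z) thread blocks, one per subblock; the velocity updates follow the same tiling pattern with the halo on the opposite side of the stencil.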
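The uniform treatment of the PML and the interior follows from the shape of the attenuation profile itself. The host-side sketch below, again an assumption rather than the paper's code, builds a one-dimensional damping array that is exactly 0 inside the computational region and ramps up quadratically through npml absorbing layers; the reflection coefficient R and the quadratic ramp are common textbook choices, not values taken from the paper.

```cuda
#include <math.h>

// Damping profile along one axis: zero in the interior, quadratic ramp
// inside the npml-point absorbing layers at each end. vmax is the maximum
// wave speed and dx the grid spacing.
void build_damping_profile(float *d, int n, int npml, float vmax, float dx)
{
    const float R  = 1e-3f;   // assumed target reflection coefficient
    const float d0 = 3.0f * vmax * logf(1.0f / R) / (2.0f * npml * dx);
    for (int i = 0; i < n; ++i) {
        float s = 0.0f;       // normalized depth into the PML, 0 in the interior
        if (i < npml)           s = (float)(npml - i) / npml;
        else if (i >= n - npml) s = (float)(i - (n - npml - 1)) / npml;
        d[i] = d0 * s * s;    // attenuation factor, 0 in the computational region
    }
}
```

Because a single update kernel can scale every grid point by exp(-d*dt), which equals 1 wherever d = 0, no branch is needed to distinguish PML points from interior points, and the two regions are computed as one.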