For the computation of the GIST global image feature extraction algorithm, a heterogeneous CPU+GPU collaborative computing scheme is implemented and optimized: the CPU handles small, irregular computations such as image quantization and linear extension, while the GPU handles compute-intensive, highly parallel operations such as filtering, Gabor feature extraction, and dimensionality reduction. To extend the method to image sequences, a thread pool is introduced on the CPU side: each thread binds one CUDA stream and processes one image, enabling concurrent stream processing of multiple images and hiding intra-stream data-transfer latency. The thread pool also provides thread pre-creation, resource pre-allocation, and dynamic adjustment of the thread count according to resource consumption, which improves the efficiency with which the CPU schedules GPU computing resources. Experimental results show that, at the same accuracy, the GIST feature extraction method on the heterogeneous platform achieves a speedup of 8.35-9.31x over the traditional CPU-only implementation, and the thread pool further improves throughput on image sequence data by 10.0%-37.2%.