The development of parallel programming libraries for raster-based geocomputation facilitates the parallelization of raster-based geocomputation algorithms. Among existing studies, the parallel raster-based geocomputation operators (PaRGO) designed and preliminarily developed by Qin et al. (2014) effectively hide the complex details of the parallel programming software and hardware environment and parallelize the general steps of raster-based geocomputation; moreover, compared with other libraries built on similar ideas, PaRGO is compatible with multiple commonly used parallel computing platforms, which gives it a clear advantage. However, PaRGO is currently designed to directly support only the parallelization of raster-based geocomputation algorithms with local, focal (neighborhood), and global computation characteristics; its ability to support the parallelization of more complex algorithms with regional computation characteristics has not yet been explored. To address this, this paper takes the recursively designed multiple-flow-direction algorithm in raster-based digital terrain analysis, which has regional computation characteristics, as a case study. Parallel design, implementation, and testing are carried out with PaRGO, and the results are evaluated in terms of runnability and parallel performance, using computing time, relative speedup, and relative parallel efficiency as quantitative indicators. The results show that although PaRGO cannot directly parallelize the recursive multiple-flow-direction algorithm, once the recursive algorithm is redesigned as a non-recursive one based on the principle of multiple-flow-direction computation, so that the original regional computation is transformed into an iterative focal computation, the algorithm can be parallelized with PaRGO and achieves good parallel performance. In a cluster environment, the MPI version of the parallel program outperforms the hybrid MPI/OpenMP version.
Parallel raster-based programming libraries have been proposed to make the details of parallel programming and the underlying parallel hardware architecture largely transparent to users, and thus to facilitate the development of parallel raster-based geocomputation programs. Among the existing parallel programming libraries, the parallel raster-based geocomputation operators (PaRGO) recently proposed by Qin et al. (2014) show clear advantages, not only because PaRGO encapsulates the general steps of parallel raster-based geocomputation, but also because it is compatible with multiple commonly used parallel computing platforms. Currently, PaRGO is designed to directly support local, focal, and global operations. However, its ability to support regional operations in raster-based geocomputation has not been evaluated. In this paper, we evaluate PaRGO in this situation by using a recursive multiple-flow-direction algorithm as a representative regional operation. Different versions of PaRGO-based parallel programs for this algorithm are tested on a symmetric multiprocessing (SMP) cluster and evaluated from two aspects: runnability and parallel performance. The experimental results show that the current PaRGO cannot directly support the parallelization of the regional operation, but it becomes applicable, with good parallel performance, once the regional operation is transformed into an iterative process of focal operations. On the SMP cluster, the MPI-version parallel program performs better than the MPI/OpenMP hybrid version.
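To illustrate the kind of transformation described above, the following is a minimal, self-contained C++ sketch (not PaRGO's actual API, and not the paper's implementation) of replacing a recursive, region-wide flow-accumulation computation with an iterative focal computation: each whole-grid pass reads only a cell and its 3x3 neighborhood, and passes are repeated until no cell changes. The grid size, the toy flow-fraction field `frac`, and all names are assumptions made only to keep the example runnable.

```cpp
// Illustrative sketch only: recursive regional computation rewritten
// as an iterative focal (neighborhood) computation.
#include <cstdio>
#include <vector>

const int NROWS = 4, NCOLS = 4;
const double NODATA = -1.0;

// 8-neighbor offsets; neighbor k's opposite direction is (7 - k).
const int DR[8] = {-1, -1, -1, 0, 0, 1, 1, 1};
const int DC[8] = {-1,  0,  1, -1, 1, -1, 0, 1};

// frac[r][c][k]: fraction of flow that cell (r,c) sends to its k-th
// neighbor, as a multiple-flow-direction (MFD) rule would assign from
// the DEM. Hard-coded below just to keep the sketch runnable.
double frac[NROWS][NCOLS][8];

bool inGrid(int r, int c) { return r >= 0 && r < NROWS && c >= 0 && c < NCOLS; }

int main() {
    // Toy partition: every cell sends all of its flow to the neighbor
    // directly below it (k == 6), so flow converges toward the bottom row.
    for (int r = 0; r < NROWS; ++r)
        for (int c = 0; c < NCOLS; ++c)
            for (int k = 0; k < 8; ++k)
                frac[r][c][k] = (k == 6) ? 1.0 : 0.0;

    // acc[r][c]: flow accumulation; NODATA means "not yet resolved".
    std::vector<std::vector<double>> acc(NROWS, std::vector<double>(NCOLS, NODATA));

    // Iterative focal computation: a cell is resolved as soon as all
    // upstream neighbors (those draining into it) are resolved. Repeat
    // whole-grid passes until nothing changes.
    bool changed = true;
    while (changed) {
        changed = false;
        for (int r = 0; r < NROWS; ++r) {
            for (int c = 0; c < NCOLS; ++c) {
                if (acc[r][c] != NODATA) continue;   // already resolved
                double sum = 1.0;                    // the cell's own area
                bool ready = true;
                for (int k = 0; k < 8; ++k) {
                    int nr = r + DR[k], nc = c + DC[k];
                    if (!inGrid(nr, nc)) continue;
                    int back = 7 - k;                // (r,c) as seen from the neighbor
                    if (frac[nr][nc][back] > 0.0) {  // neighbor drains into (r,c)
                        if (acc[nr][nc] == NODATA) { ready = false; break; }
                        sum += acc[nr][nc] * frac[nr][nc][back];
                    }
                }
                if (ready) { acc[r][c] = sum; changed = true; }
            }
        }
    }

    for (int r = 0; r < NROWS; ++r) {
        for (int c = 0; c < NCOLS; ++c) printf("%6.2f ", acc[r][c]);
        printf("\n");
    }
    return 0;
}
```

Because each pass is a pure per-cell neighborhood operation, a focal-style parallel library can, in principle, distribute the grid across processes and exchange only the halo rows between passes, which is the property the transformation is meant to expose.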