The parallel computing framework Spark lacks an effective cache selection mechanism and cannot automatically identify and cache data with a high degree of reuse; its cache replacement algorithm is LRU, whose eviction metric is too coarse, degrading task execution efficiency. This paper proposes a self-adaptive cache management strategy for the Spark framework (Self-Adaptive Cache Management, SACM), comprising an automatic cache selection algorithm (Selection), a parallel cache cleanup algorithm (Parallel Cache Cleanup, PCC), and a weight-based cache replacement algorithm (Lowest Weight Replacement, LWR). The selection algorithm analyzes a job's DAG (Directed Acyclic Graph) structure to identify reused RDDs and cache them automatically. The parallel cleanup algorithm asynchronously removes valueless RDDs to improve cluster memory utilization. The weight-based replacement algorithm chooses eviction targets by weight, avoiding the task delays incurred by recomputing complex RDDs and preserving computational efficiency under resource bottlenecks. Experiments show that our strategy improves Spark's task execution efficiency and makes effective use of memory resources.
As a parallel computation framework, Spark does not have a good strategy for selecting valuable RDDs to cache in limited memory. When memory is full, Spark discards the least recently used RDD while ignoring other factors such as computation cost. This paper proposes a self-adaptive cache management strategy (SACM), which comprises an automatic selection algorithm (Selection), a parallel cache cleanup algorithm (PCC), and a lowest weight replacement algorithm (LWR). The selection algorithm seeks out valuable RDDs and caches their partitions to speed up data-intensive computations. PCC cleans up valueless RDDs asynchronously to improve memory utilization. LWR takes comprehensive account of an RDD's usage frequency, computation cost, and size. Experimental results show that Spark with our selection algorithm executes tasks faster than traditional Spark, the parallel cleanup algorithm improves memory utilization, and LWR performs better under limited memory.
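As a rough illustration of the two mechanisms the abstract names, the Scala sketch below shows (a) how reused RDDs might be identified by walking a job's lineage DAG, and (b) how an LWR-style weight might combine usage frequency, computation cost, and RDD size. The weight formula, the CachedRdd fields, and the helper names are illustrative assumptions, not the paper's actual implementation; only RDD.dependencies and RDD.id are real Spark API.

import org.apache.spark.rdd.RDD
import scala.collection.mutable

object SacmSketch {
  // (a) Selection sketch: count how many times each ancestor RDD is
  // reached while walking the lineage DAG; an RDD reached along more
  // than one path is a reuse candidate worth caching automatically.
  def reuseCounts(root: RDD[_]): Map[Int, Int] = {
    val counts = mutable.Map.empty[Int, Int].withDefaultValue(0)
    def visit(r: RDD[_]): Unit = {
      counts(r.id) += 1
      r.dependencies.foreach(dep => visit(dep.rdd))
    }
    visit(root)
    counts.toMap
  }

  // (b) LWR sketch: a hypothetical weight combining the three factors
  // the abstract names. Frequently used, expensive-to-recompute RDDs
  // get a high weight; large RDDs get a lower one. Under memory
  // pressure, the cached RDD with the lowest weight is evicted instead
  // of the least recently used one.
  final case class CachedRdd(id: Int, useFrequency: Int,
                             computeCostMs: Long, sizeBytes: Long)

  def weight(r: CachedRdd): Double =
    r.useFrequency.toDouble * r.computeCostMs / math.max(r.sizeBytes, 1L).toDouble

  def evictionCandidate(cached: Seq[CachedRdd]): Option[CachedRdd] =
    if (cached.isEmpty) None else Some(cached.minBy(weight))
}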