To address the problems of scalability, algorithm convergence performance, and operational efficiency in building large-scale machine learning systems, the challenges that large-scale samples, models, and network communication pose to machine learning systems were analyzed, together with the solutions adopted by existing systems. Taking the Latent Dirichlet Allocation (LDA) model as an example, three open-source distributed LDA systems (Spark LDA, PLDA+, and LightLDA) were compared in terms of system resource consumption, algorithm convergence performance, and scalability, in order to analyze their differences in design, implementation, and performance. The experimental results show that for small sample sets and models, the memory usage of LightLDA and PLDA+ is about half that of Spark LDA, and their convergence speed is 4 to 5 times that of Spark LDA; for large-scale sample sets and models, the total network communication and convergence time of LightLDA are far smaller than those of PLDA+ and Spark LDA, demonstrating good scalability. The "data parallelism + model parallelism" architecture can effectively meet the challenges of large-scale samples and models; the Stale Synchronous Parallel (SSP) parameter synchronization strategy, local model caching, and sparse parameter storage can effectively reduce network overhead and improve system operating efficiency.
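The Stale Synchronous Parallel (SSP) strategy credited above with reducing network overhead bounds how far the fastest worker may run ahead of the slowest before it must wait for parameter synchronization. A minimal sketch of that staleness check is shown below; the function and variable names are illustrative only and are not taken from any of the three compared systems.

```python
def can_proceed(worker_clock, all_clocks, staleness):
    """Under SSP, a worker may advance to its next iteration only while it
    is at most `staleness` iterations ahead of the slowest worker; otherwise
    it must block and wait for stragglers to catch up."""
    return worker_clock - min(all_clocks) <= staleness

# Example: three workers at iterations 5, 3 and 4, staleness bound s = 2.
print(can_proceed(5, [5, 3, 4], 2))  # fastest is 2 ahead of slowest: may proceed
print(can_proceed(6, [6, 3, 4], 2))  # would be 3 ahead: must wait
```

With `staleness = 0` this degenerates to the fully synchronous (BSP) model, which maximizes communication stalls; a small positive bound lets fast workers compute on slightly stale parameters instead of idling on the network.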