To cope with the sheer volume of big data, an online-updating extension of the Macro-Q algorithm, Macro-Q with in-place updating (MQIU), is proposed. MQIU updates the value function of abstract actions and the value function of primitive actions simultaneously, which improves the utilization of data samples and speeds up convergence. To handle the variability that the conventional Markov decision process model and abstract actions cope with poorly, an interruption mechanism is introduced, yielding a model-free Macro-Q learning algorithm with interruptible abstract actions (IMQ), based on hierarchical reinforcement learning, that can learn and improve control policies in a dynamic environment. Simulation results verify that MQIU accelerates convergence and can therefore tackle larger-scale problems, and that IMQ solves tasks faster while keeping learning performance stable.
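The two updates MQIU performs per macro execution can be illustrated with a minimal tabular sketch. Everything below is assumed for illustration only: the toy ChainEnv, the names mqiu_execute and best_primitive_value, and the constants GAMMA and ALPHA are not from the paper. The point is that one run of a macro yields a one-step Q-learning backup for every primitive action taken (the in-place updates) plus a single SMDP-style backup for the macro itself.

```python
from collections import defaultdict

GAMMA, ALPHA = 0.95, 0.1   # illustrative constants, not from the paper
PRIMITIVES = (0, 1)        # 0 = left, 1 = right

class ChainEnv:
    """Toy chain MDP: states 0..n-1, reward 1.0 on reaching the right end."""
    def __init__(self, n=10):
        self.n, self.s = n, 0

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = max(0, min(self.n - 1, self.s + (1 if a == 1 else -1)))
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done

def best_primitive_value(Q, s):
    return max(Q[(s, a)] for a in PRIMITIVES)

def mqiu_execute(env, Q, s0, macro):
    """Run one macro (a tuple of primitive actions) from s0 and apply BOTH
    updates: a one-step backup per primitive step, plus one macro backup."""
    s, ret, k, done = s0, 0.0, 0, False
    for a in macro:
        s_next, r, done = env.step(a)
        # In-place update on the primitive action -- this extra backup is
        # what raises sample efficiency compared with plain Macro-Q.
        td_target = r + (0.0 if done else GAMMA * best_primitive_value(Q, s_next))
        Q[(s, a)] += ALPHA * (td_target - Q[(s, a)])
        ret += GAMMA ** k * r
        s, k = s_next, k + 1
        if done:
            break
    # Macro-Q backup: bootstrap discounted by gamma^k, the macro's duration.
    # For brevity the bootstrap maximizes over primitive actions only.
    macro_target = ret + (0.0 if done else GAMMA ** k * best_primitive_value(Q, s))
    Q[(s0, macro)] += ALPHA * (macro_target - Q[(s0, macro)])
    return s, done

# Usage: learn on the toy chain with one hand-crafted macro, "right x 3".
Q = defaultdict(float)
env = ChainEnv()
for _ in range(200):
    s, done = env.reset(), False
    while not done:
        s, done = mqiu_execute(env, Q, s, macro=(1, 1, 1))
```

Because each macro execution now feeds k + 1 backups into the table instead of one, fewer samples are needed for the value estimates to converge, which is the sample-efficiency claim above.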
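The interruption mechanism of IMQ can be sketched in the same setting. Continuing with the ChainEnv, Q, GAMMA, and ALPHA defined above, and assuming illustrative names (should_interrupt, imq_execute, hand-crafted options left3/right3 that are not from the paper), the running option is abandoned whenever its estimated value in the current state drops below the best alternative; this re-check after every primitive step is what lets the agent react to a changing environment mid-option.

```python
import random

OPTIONS = ("left3", "right3")                       # hypothetical toy options
OPTION_POLICY = {"left3": lambda s: 0, "right3": lambda s: 1}

def should_interrupt(Q, s, running):
    """Interrupt when continuing the running option looks worse than
    switching to the best alternative available in the current state."""
    return Q[(s, running)] < max(Q[(s, o)] for o in OPTIONS)

def imq_execute(env, Q, s0, epsilon=0.1):
    """Pick an option epsilon-greedily, run it, re-checking the interruption
    condition after every primitive step; on termination (natural or
    interrupted) apply a Macro-Q backup over the actual duration k."""
    if random.random() < epsilon:
        o = random.choice(OPTIONS)
    else:
        o = max(OPTIONS, key=lambda o_: Q[(s0, o_)])
    s, ret, k, done = s0, 0.0, 0, False
    for _ in range(3):                 # each toy option runs at most 3 steps
        s, r, done = env.step(OPTION_POLICY[o](s))
        ret += GAMMA ** k * r
        k += 1
        if done or should_interrupt(Q, s, o):
            break                      # early termination via interruption
    # Backup over the actual (possibly shortened) duration k; the bootstrap
    # here maximizes over options only, for brevity.
    target = ret + (0.0 if done else GAMMA ** k * max(Q[(s, o_)] for o_ in OPTIONS))
    Q[(s0, o)] += ALPHA * (target - Q[(s0, o)])
    return s, done

# Usage, reusing Q and env from the previous sketch:
for _ in range(300):
    s, done = env.reset(), False
    while not done:
        s, done = imq_execute(env, Q, s, epsilon=0.2)
```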