To address the non-Markovian environments and the curse of dimensionality encountered in multi-agent reinforcement learning, a semi-Markov game (SMG) model and a MAHRL (multi-agent hierarchical reinforcement learning) collaborative framework are proposed. The model relaxes the requirements imposed on the external environment and introduces the notions of stochastic time steps and a communication strategy, which better matches the practical setting of MAHRL research. Within the collaborative framework, the different subtasks are modeled with SMG and SMDP models respectively, and the coordination mechanism among agents is made explicit. Experiments demonstrate the validity and superiority of the SMG model and the collaborative framework.
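As a minimal sketch of the "stochastic time step" idea the abstract refers to, the following toy SMDP Q-learning update (an illustrative assumption, not the paper's implementation; the state names and toy task are invented) shows how an action that runs for a random duration tau is backed up with a discount of gamma ** tau instead of a single-step gamma:

```python
GAMMA = 0.9   # per-primitive-step discount factor
ALPHA = 0.5   # learning rate

def smdp_q_update(Q, s, a, cumulative_reward, tau, s_next, actions):
    """One SMDP backup: Q(s,a) += alpha * (R + gamma^tau * max_b Q(s',b) - Q(s,a)).

    cumulative_reward is the (already discounted) reward collected while the
    action ran for tau primitive steps; the bootstrap term is discounted by
    gamma ** tau, which is what distinguishes SMDP learning from ordinary
    one-step Q-learning.
    """
    best_next = max(Q[(s_next, b)] for b in actions)
    target = cumulative_reward + (GAMMA ** tau) * best_next
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    return Q[(s, a)]

# Toy usage: two states and one action; from s0 the action takes tau = 3
# primitive steps, collects reward 1.0, and lands in the absorbing state s1.
actions = ["go"]
Q = {(s, a): 0.0 for s in ["s0", "s1"] for a in actions}
for _ in range(200):
    smdp_q_update(Q, "s0", "go", 1.0, 3, "s1", actions)
print(round(Q[("s0", "go")], 3))  # converges toward 1.0, since Q(s1, .) stays 0
```

In the full framework of the abstract, the game-theoretic (SMG) subtasks would additionally condition the backup on the joint action of the other agents; this sketch only illustrates the temporal-abstraction half of the model.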