A key research challenge in reinforcement learning is balancing exploration and exploitation in large-scale or continuous spaces. To address this problem, this paper applies function approximation and Gaussian process methods to propose a new actor-critic (AC) algorithm. In the actor, the algorithm uses the temporal-difference error to construct an update rule for the policy parameters; in the critic, it models a linear parametric value function with a Gaussian process and, combined with a generative model, derives the posterior distribution of the value function by Bayesian inference. The algorithm is applied to the pole-balancing experiment, and the results show that it converges quickly, effectively balances exploration and exploitation in large-scale or continuous spaces, and achieves good performance.
The problem of balancing exploration and exploitation in large or continuous state spaces is a hot topic in the field of reinforcement learning. To address this problem, this paper presents a novel actor-critic algorithm that combines function approximation with Gaussian process methods. In the actor, the algorithm uses the temporal-difference error to construct a mean-squared-error objective with respect to the policy parameters. In the critic, it models the linear parametric state-value function with a Gaussian process and, in conjunction with a generative model, obtains the posterior distribution of the value-function parameter vector by Bayesian inference. Experimental results on the pole-balancing task show that the algorithm converges faster and effectively achieves the balance between exploration and exploitation in large or continuous state spaces, with good convergence performance.
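The abstract does not give the algorithm's equations, so the following is only a minimal illustrative sketch of the general scheme it describes: a Gaussian policy updated by a TD-error-scaled policy gradient (actor), and a Bayesian linear value model, equivalent to a Gaussian process with a linear kernel over fixed features (critic). The toy 1-D environment, feature map, and all hyperparameters are assumptions for demonstration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(s):
    # Illustrative polynomial features of a scalar state (assumption).
    return np.array([1.0, s, s * s])

d = 3
# Critic: Bayesian linear model V(s) = phi(s)^T w with a Gaussian prior
# on w; the posterior is tracked via a precision matrix A and vector b.
A = np.eye(d)                      # prior precision
b = np.zeros(d)                    # precision-weighted prior mean
theta = 0.0                        # actor: Gaussian policy a ~ N(theta*s, sigma^2)
sigma, gamma, alpha = 0.5, 0.95, 0.01

for episode in range(200):
    s = rng.uniform(-1.0, 1.0)
    for t in range(20):
        a = theta * s + sigma * rng.standard_normal()
        s_next = s + 0.1 * a               # toy linear dynamics (assumption)
        r = -s_next ** 2                   # reward: stay near the origin
        w = np.linalg.solve(A, b)          # posterior mean of the critic weights
        v, v_next = features(s) @ w, features(s_next) @ w
        delta = r + gamma * v_next - v     # temporal-difference error
        # Actor: policy-gradient step scaled by the TD error.
        grad_logpi = (a - theta * s) * s / sigma ** 2
        theta += alpha * delta * grad_logpi
        # Critic: conjugate Bayesian update with bootstrapped target r + gamma*V(s').
        phi = features(s)
        A += np.outer(phi, phi)
        b += phi * (r + gamma * v_next)
        s = s_next
```

The conjugate update keeps the full posterior over the value-function weights, so the critic's uncertainty shrinks as more transitions are observed, which is one way such a method can trade off exploration against exploitation.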