The agent Q-learning method is extended to a utility-driven Markov reinforcement learning problem. Compared with learning under a single absorbing state, the learning process is no longer state-driven but utility-driven: the agent's learning is no longer tied to a particular goal state, but instead maximizes the average expected reward per step, i.e., the total reward accumulated over a given number of steps, so the learning result is an optimal cycle with maximal average reward. The convergence of reinforcement learning with multiple absorbing states is proved. Treating a raster image as a grid world with multiple absorbing states, simulations verify the effectiveness of multi-absorbing-state Q-learning in a deterministic environment, and the learned result is indeed such a reward-maximizing cycle.
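To make the utility-driven objective concrete, the following is a minimal sketch, assuming a tiny deterministic grid world with two reward cells (standing in for multiple absorbing states) and using a standard average-reward (R-learning style) Q-learning update as a stand-in for the paper's exact extension; the grid layout, reward values, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Tiny deterministic grid world (illustrative assumption, not the paper's setup).
# Two cells carry positive reward; they are ordinary cells here so that a
# reward-collecting cycle, rather than a terminal goal, can emerge.
N = 4
REWARD_CELLS = {(0, 3): 1.0, (3, 0): 1.0}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic transition: move within the grid; reward on entering a reward cell."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    reward = REWARD_CELLS.get((nr, nc), 0.0) if (nr, nc) != (r, c) else 0.0
    return (nr, nc), reward

# Average-reward Q-learning (R-learning style): maximize reward per step
# rather than the return obtained by reaching a single goal state.
Q = np.zeros((N, N, len(ACTIONS)))
rho = 0.0                       # running estimate of the average reward per step
alpha, beta, eps = 0.1, 0.01, 0.1
rng = np.random.default_rng(0)

state = (0, 0)
for _ in range(200_000):
    r, c = state
    a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[r, c]))
    greedy = a == int(np.argmax(Q[r, c]))
    (nr, nc), rew = step(state, a)
    Q[r, c, a] += alpha * (rew - rho + Q[nr, nc].max() - Q[r, c, a])
    if greedy:                  # update the average-reward estimate on greedy moves only
        rho += beta * (rew - rho + Q[nr, nc].max() - Q[r, c].max())
    state = (nr, nc)

print("estimated average reward per step:", round(rho, 3))

# The greedy policy does not stop at a goal: it settles into a short cycle that
# keeps re-entering a reward cell, maximizing the average reward per step.
state, trajectory = (0, 0), []
for _ in range(15):
    r, c = state
    state, _ = step(state, int(np.argmax(Q[r, c])))
    trajectory.append(state)
print("greedy trajectory:", trajectory)
```

In this toy setting the greedy trajectory loops around a reward cell instead of terminating, which is the qualitative behavior the abstract describes: the learning result is a cycle with maximal average reward rather than a path to a single absorbing goal.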