To address the slow convergence of traditional Q-value iteration algorithms based on lookup tables or function approximation when dealing with continuous-space problems, as well as their difficulty in deriving continuous action policies, an on-policy temporal-difference algorithm based on double-layer fuzzy partitioning, DFP-OPTD, is proposed, and its convergence is analyzed theoretically. In the algorithm, the first layer of fuzzy partitioning acts on the state space and the second layer acts on the action space, and the Q-value function is computed by combining the two layers of fuzzy partitioning. Based on the resulting Q-value function, the consequent parameters of the fuzzy rules are updated by gradient descent. DFP-OPTD is applied to classical reinforcement learning problems, and the experimental results show that the algorithm has good convergence performance and can derive continuous action policies.
When dealing with continuous-space problems, the traditional Q-iteration algorithms based on lookup tables or function approximation converge slowly and have difficulty producing a continuous action policy. To overcome these weaknesses, an on-policy TD algorithm named DFP-OPTD was proposed based on double-layer fuzzy partitioning, and its convergence was proved. The first layer of fuzzy partitioning was applied to the state space, the second layer of fuzzy partitioning was applied to the action space, and the Q-value function was computed by combining the two layers of fuzzy partitioning. Based on the Q-value function, the consequent parameters of the fuzzy rules were updated by the gradient descent method. Applying DFP-OPTD to two classical reinforcement learning problems, experimental results show that the algorithm not only can be used to obtain a continuous action policy, but also has better convergence performance.
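To illustrate the Q-value construction described above, the following is a minimal Python sketch of a two-layer fuzzy approximation with an on-policy TD update of the consequent parameters. The triangular membership functions, the fixed grids of fuzzy sets, and the names `triangular`, `q_value`, and `td_update`, as well as the learning-rate and discount values, are illustrative assumptions and not the paper's exact DFP-OPTD formulation.

```python
import numpy as np

def triangular(x, centers):
    """Membership degrees of scalar x in triangular fuzzy sets placed at `centers`."""
    width = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)

# Assumed layout: first layer partitions the state space, second layer the action space.
state_centers = np.linspace(-1.0, 1.0, 5)
action_centers = np.linspace(-1.0, 1.0, 3)
q = np.zeros((len(state_centers), len(action_centers)))  # consequent parameters of the rules

def q_value(s, a):
    """Q(s, a) as a doubly weighted sum over both fuzzy partitions."""
    phi = triangular(s, state_centers)
    phi /= phi.sum()
    mu = triangular(a, action_centers)
    mu /= mu.sum()
    return phi @ q @ mu, phi, mu

def td_update(s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    """On-policy TD update: gradient descent on the consequent parameters."""
    q_sa, phi, mu = q_value(s, a)
    q_next, _, _ = q_value(s_next, a_next)
    delta = r + gamma * q_next - q_sa          # TD error
    q[:] += alpha * delta * np.outer(phi, mu)  # gradient of Q w.r.t. q is the outer product phi x mu
```

Under these assumptions, a continuous action can be read off at decision time by weighting the action-set centers with the rule activations, which is one common way such fuzzy controllers produce continuous policies.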