Applying function approximation to reinforcement learning has become an active research topic in machine learning in recent years. To address the problem that traditional Q(λ) learning algorithms based on lookup tables or on function approximation converge slowly, or fail to converge, in large state spaces, this paper proposes an off-policy Q(λ) algorithm based on linear function approximation. By introducing an associated importance factor, the algorithm unifies the on-policy and off-policy sample-data distributions as the number of iterations grows, which ensures convergence. Under the premise of sample-data consistency between the on-policy and off-policy settings, a theoretical proof of convergence is given. The proposed algorithm is evaluated on the Baird counterexample, Mountain-Car, and Random Walk benchmarks. The experimental results show that, compared with traditional off-policy algorithms based on function approximation, the proposed algorithm converges reliably; compared with traditional lookup-table-based algorithms such as TD(λ), it converges faster and is more robust to growth of the state space.
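The abstract does not spell out the update rule, so the following is only a minimal sketch of a generic off-policy Q(λ) learner with linear function approximation and per-decision importance-sampling ratios, run on a small random-walk chain. It illustrates the family of methods the paper works in, not the proposed algorithm itself; the environment, the one-hot feature map, and all hyper-parameters below are assumptions made for the example.

import numpy as np

# Minimal sketch (assumed, not the paper's algorithm): off-policy Q(lambda)
# with linear function approximation and per-decision importance-sampling
# ratios, learning a greedy target policy from a uniform-random behaviour
# policy on a small random-walk chain.

N_STATES = 7                          # non-terminal states 0..6
ACTIONS = (-1, +1)                    # step left, step right
GAMMA, LAM, ALPHA = 0.9, 0.5, 0.05    # illustrative hyper-parameters


def features(s, a):
    """One-hot state-action features; with them the linear learner reduces
    to a table, but the update below works for any feature map phi(s, a)."""
    phi = np.zeros(N_STATES * len(ACTIONS))
    phi[s * len(ACTIONS) + ACTIONS.index(a)] = 1.0
    return phi


def q(theta, s, a):
    return theta @ features(s, a)


def greedy(theta, s):
    return max(ACTIONS, key=lambda a: q(theta, s, a))


def run_episode(theta, rng):
    s = N_STATES // 2
    e = np.zeros_like(theta)                      # eligibility trace
    while True:
        a = ACTIONS[rng.integers(len(ACTIONS))]   # behaviour: uniform random
        s_next = s + a
        done = s_next < 0 or s_next >= N_STATES
        r = 1.0 if (done and s_next >= N_STATES) else 0.0

        # Importance ratio pi(a|s) / b(a|s): greedy target, uniform behaviour.
        rho = (1.0 if a == greedy(theta, s) else 0.0) * len(ACTIONS)

        # TD error bootstraps on the target (greedy) policy's value.
        target = r if done else r + GAMMA * q(theta, s_next, greedy(theta, s_next))
        delta = target - q(theta, s, a)

        # Per-decision importance weighting folded into the trace decay.
        e = GAMMA * LAM * rho * e + features(s, a)
        theta += ALPHA * delta * e

        if done:
            return theta
        s = s_next


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = np.zeros(N_STATES * len(ACTIONS))
    for _ in range(2000):
        theta = run_episode(theta, rng)
    print("greedy action per state:", [greedy(theta, s) for s in range(N_STATES)])

In this sketch, folding the ratio ρ_t = π(a_t|s_t)/b(a_t|s_t) into the trace is what reconciles the behaviour-policy samples with the target policy; because the target policy is greedy, a non-greedy behaviour action gives ρ_t = 0 and cuts the accumulated trace, recovering Watkins-style Q(λ). According to the abstract, the paper's associated importance factor serves the same broad purpose of aligning the on-policy and off-policy sample distributions, but its exact form is not given here.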