To address the poor initial performance and slow convergence of reinforcement learning algorithms in continuous state spaces, a potential-function-based reward shaping mechanism was proposed to improve reinforcement learning algorithms. The mechanism adaptively propagates model knowledge to the learner in the form of an additional reward signal, so that both the initial performance and the convergence speed can be improved effectively. In view of the good performance and remaining problems of the radial basis function (RBF) network, an adaptive normalized RBF (ANRBF) network was put forward as the potential function for generating shaping rewards. Based on the ANRBF network, a gradient descent (GD) reinforcement learning algorithm, ANRBF-GD-Sarsa(λ), was proposed. The convergence of the ANRBF-GD-Sarsa(λ) algorithm was analyzed theoretically, and extensive experiments were conducted to show that the proposed algorithm achieves good initial performance and a high convergence speed.
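To make the shaping mechanism concrete, below is a minimal Python sketch of potential-based reward shaping in its standard form, F(s, s') = γΦ(s') − Φ(s), driving a linear, gradient-descent Sarsa(λ) learner over normalized RBF features. The environment interface (`env.reset()`, `env.step(a)`), the fixed RBF centers and width, and the fixed potential weights `w_pot` are illustrative assumptions; the paper's ANRBF network additionally adapts the network itself, which is omitted here for brevity.

```python
import numpy as np

class NormalizedRBF:
    """Normalized RBF feature map: Gaussian kernels at fixed centers,
    rescaled to sum to 1. (Illustrative; the ANRBF network of the paper
    also adapts its centers and widths, which is not shown here.)"""
    def __init__(self, centers, sigma):
        self.centers = np.asarray(centers, dtype=float)   # shape (K, d)
        self.sigma = float(sigma)

    def __call__(self, s):
        d2 = np.sum((self.centers - np.asarray(s)) ** 2, axis=1)
        g = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return g / (g.sum() + 1e-12)                      # normalization

def run_sarsa_lambda(env, rbf, n_actions, w_pot,
                     alpha=0.05, gamma=0.99, lam=0.9,
                     epsilon=0.1, episodes=200):
    """Gradient-descent Sarsa(lambda) with linear Q over RBF features and a
    potential-based shaping reward. `env.step(a)` is assumed to return
    (next_state, reward, done)."""
    K = rbf.centers.shape[0]
    theta = np.zeros((n_actions, K))                      # linear Q weights

    def q(feat, a):
        return theta[a] @ feat

    def policy(feat):                                     # epsilon-greedy
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax([q(feat, a) for a in range(n_actions)]))

    for _ in range(episodes):
        e = np.zeros_like(theta)                          # eligibility traces
        s = env.reset()
        f = rbf(s)
        a = policy(f)
        done = False
        while not done:
            s2, r, done = env.step(a)
            f2 = rbf(s2)
            # Potential-based shaping: F(s, s') = gamma*Phi(s') - Phi(s),
            # with the potential Phi realized by the normalized RBF network.
            shaped = r + gamma * (w_pot @ f2) - (w_pot @ f)
            a2 = policy(f2)
            target = shaped if done else shaped + gamma * q(f2, a2)
            delta = target - q(f, a)                      # TD error
            e *= gamma * lam                              # decay traces
            e[a] += f                                     # accumulate trace
            theta += alpha * delta * e                    # GD update
            f, a = f2, a2
    return theta
```

Because the shaping term is a difference of potentials, it changes only the reward signal seen during learning, not the set of optimal policies, which is what allows the extra reward to speed up early learning without biasing the final solution.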