Classical reinforcement learning algorithms are mainly applied in discrete state and action spaces. In complex learning environments, discrete-space reinforcement learning methods cannot meet practical needs well, while the optimal policies produced by common continuous-space methods oscillate heavily. For the optimal control problem in continuous action spaces with interval constraints, an action-weighted actor-critic algorithm, Action Weight Policy Search Actor Critic (AW-PS-AC), is proposed. AW-PS-AC takes the actor-critic architecture as its basic framework, approximates the optimal state-value function and the optimal policy with linear function approximators, and updates one set of value-function parameters and two sets of policy parameters by gradient descent. The two sets of policy parameters are weighted to obtain the optimal policy, and the resulting optimal action is constrained to the admissible interval to prevent it from going out of range. To further accelerate convergence, an improved temporal-difference algorithm is designed: the temporal-difference error of the value function is used to update the optimal policy, and a policy eligibility trace is introduced to adjust the policy parameters. To establish convergence, the convergence of AW-PS-AC is analyzed under specified assumptions. To verify its effectiveness, AW-PS-AC is simulated on the pole-balancing and puddle world experiments. The results show that AW-PS-AC effectively solves the approximate optimal policy problem in continuous spaces in both experiments and, compared with classical continuous-action-space algorithms, converges faster and is more stable.
Classic reinforcement learning algorithms are mainly designed for discrete state and action spaces. In complex environments, or in the continuous spaces that arise more often in practice, these discrete-space methods cannot satisfy the requirements. One feasible approach is to discretize the state and action spaces so that discrete-space methods can be applied, but finding a reasonable discretization is itself a difficult problem. Methods that operate directly in continuous spaces avoid discretization; however, most of them do not consider the constraint on the action range, and their optimal actions fluctuate heavily. To be more applicable in continuous action spaces, we propose an actor-critic algorithm based on action weighting, called AW-PS-AC, which takes the action-range constraint into account and reduces the fluctuation of the policy. AW-PS-AC is designed within the actor-critic framework, a classic architecture for continuous spaces. The exploration policy follows a Gaussian distribution whose mean is the current optimal action, so that the selected action deviates from the optimal action by only a small exploration factor. The optimal state-value function and the optimal policy are approximated by linear function approximators, and gradient descent is used to update one set of value-function parameters and two sets of policy parameters. The two sets of policy parameters are weighted to obtain the optimal policy and to constrain the optimal action, so that the action does not exceed the admissible range and the policy does not fluctuate significantly; this action weighting is what enforces the action-range constraint. Moreover, the samples can be exploited more thoroughly, yielding better performance from only a small amount of data. To speed up convergence, an improved temporal-difference algorithm is designed: the temporal-difference error of the value function is used to update the optimal policy, and a policy eligibility trace is introduced to adjust the policy parameters. The convergence of AW-PS-AC is analyzed under specified assumptions. To verify its effectiveness, AW-PS-AC is simulated on the pole-balancing and puddle world experiments. The experimental results show that AW-PS-AC effectively solves the approximate optimal policy problem in continuous spaces in both experiments and, compared with classic continuous-action-space algorithms, converges faster and is more stable.
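The abstract does not give the exact update equations, so the following is only a minimal Python sketch of a linear actor-critic with action weighting along the lines described above. The feature map phi(s), the fixed 0.5/0.5 weighting of the two policy parameter sets, the learning rates, and the class name AWPSACAgentSketch are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: linear actor-critic with two weighted policy
# parameter sets, Gaussian exploration around the optimal action, an action
# interval constraint, and TD(lambda)-style eligibility traces. The concrete
# weighting scheme and step sizes are assumptions, not the paper's AW-PS-AC.
import numpy as np


class AWPSACAgentSketch:
    def __init__(self, n_features, a_min, a_max,
                 gamma=0.99, lam=0.9, alpha_v=0.05, alpha_pi=0.01, sigma=0.1):
        self.a_min, self.a_max = a_min, a_max        # action interval constraint
        self.gamma, self.lam = gamma, lam            # discount / trace decay
        self.alpha_v, self.alpha_pi = alpha_v, alpha_pi
        self.sigma = sigma                           # small exploration std
        self.theta = np.zeros(n_features)            # one set of critic parameters
        self.u1 = np.zeros(n_features)               # two sets of policy parameters
        self.u2 = np.zeros(n_features)
        self.e_v = np.zeros(n_features)              # critic eligibility trace
        self.e1 = np.zeros(n_features)               # policy eligibility traces
        self.e2 = np.zeros(n_features)

    def value(self, phi):
        return self.theta @ phi                      # V(s) = theta^T phi(s)

    def mean_action(self, phi):
        # Weighted combination of the two linear policies; the 0.5/0.5 weights
        # are fixed here purely for illustration.
        a = 0.5 * (self.u1 @ phi) + 0.5 * (self.u2 @ phi)
        return float(np.clip(a, self.a_min, self.a_max))

    def act(self, phi):
        # Gaussian exploration centred on the (constrained) optimal action.
        a = self.mean_action(phi) + self.sigma * np.random.randn()
        return float(np.clip(a, self.a_min, self.a_max))

    def update(self, phi, a, r, phi_next, done):
        # The TD error of the value function drives both critic and actor updates.
        target = r if done else r + self.gamma * self.value(phi_next)
        delta = target - self.value(phi)

        # Critic: TD(lambda) with an accumulating trace.
        self.e_v = self.gamma * self.lam * self.e_v + phi
        self.theta += self.alpha_v * delta * self.e_v

        # Actor: update both parameter sets using the exploration deviation
        # (a - mu(s)) as the score direction and a policy eligibility trace.
        score = (a - self.mean_action(phi)) * phi
        self.e1 = self.gamma * self.lam * self.e1 + 0.5 * score
        self.e2 = self.gamma * self.lam * self.e2 + 0.5 * score
        self.u1 += self.alpha_pi * delta * self.e1
        self.u2 += self.alpha_pi * delta * self.e2

        if done:                                     # reset traces at episode end
            self.e_v[:] = 0.0
            self.e1[:] = 0.0
            self.e2[:] = 0.0
        return delta
```

In this sketch the interval constraint is enforced by clipping the weighted action; the paper's weighting of the two policy parameter sets is intended to keep the optimal action inside the interval, which the fixed convex combination above only approximates.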