Reinforcement learning is an online learning method in which an agent seeks an optimal policy by maximizing the cumulative reward while interacting with the environment. In a non-stationary environment, the MDP model at a given moment changes once the agent has interacted with it, so traditional reinforcement learning methods built on a stationary MDP model cannot solve for the optimal policy in non-stationary environments. To address policy learning in non-stationary environments, the non-stationary environment is modeled as a distribution over MDPs, and a policy search algorithm based on a formula set, FSPS, is proposed. During learning, FSPS collects the historical samples obtained, extracts feature information from them, uses these features to construct different formulas for action selection, and applies policy search to find the optimal formula. On this basis, an optimality bound of the resulting policy is given, and it is proved theoretically that the optimality of the policy transferred to a new MDP distribution mainly depends on the distance between the MDP distributions and the performance of the learned policy in the original MDP distribution. Finally, the FSPS algorithm is applied to the classical Markov Chain problem, and the experimental results show that the learned policy achieves good performance.
As an online learning method, reinforcement learning obtains the optimal policy with the maximum expected cumulative reward by interacting with the environment. It is mostly based on the stationary Markov Decision Process (MDP) and is therefore unable to deal with the non-stationary case, because the MDP model changes once the agent interacts with the environment, so traditional reinforcement learning algorithms cannot be used to learn an optimal policy directly. Hereby, a novel policy search algorithm based on a formula set (FSPS) is proposed, where the formula set is constructed from features extracted from the collected historical sample trajectories and the formula with the best performance is adopted as the optimal policy. The algorithm also takes advantage of transfer learning by transferring the learned policy between two similar MDP distributions, where the performance of the transferred policy mainly depends on the distance between the two MDP distributions as well as the performance of the learned policy in the original MDP distribution. Simulation results on the Markov Chain problem show that the algorithm solves the non-stationary case quite well.
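To make the formula-set idea concrete, the following is a minimal Python sketch, not the paper's actual construction: it assumes a toy chain MDP whose slip probability is drawn from a distribution, hand-written candidate formulas built from simple features of the collected history, and a brute-force policy search that keeps the formula with the best average return over sampled MDPs. All names, features, and parameters here are illustrative assumptions.

import random

def sample_chain_mdp(rng):
    """Draw a small chain MDP from a distribution by randomizing its slip probability."""
    slip = rng.uniform(0.0, 0.3)          # chance an action moves the wrong way
    return {"n_states": 5, "slip": slip}

def step(mdp, state, action, rng):
    """Move right (reward 1 at the last state) or left (small reward), with slip noise."""
    if rng.random() < mdp["slip"]:
        action = 1 - action
    if action == 1:                        # move right
        state = min(state + 1, mdp["n_states"] - 1)
        reward = 1.0 if state == mdp["n_states"] - 1 else 0.0
    else:                                  # move left
        state = max(state - 1, 0)
        reward = 0.1
    return state, reward

# Candidate formulas: each maps extracted features of the history to an action.
# Here the features are just (current state, mean reward so far); the paper's
# formulas would be built from richer statistics of the sampled trajectories.
FORMULAS = [
    lambda s, avg_r: 1,                        # always go right
    lambda s, avg_r: 0,                        # always go left
    lambda s, avg_r: 1 if avg_r < 0.2 else 0,  # switch based on observed reward
]

def evaluate(formula, n_mdps=50, horizon=30, seed=0):
    """Average return of a formula over MDPs sampled from the distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_mdps):
        mdp, state, ret, rewards = sample_chain_mdp(rng), 0, 0.0, []
        for _ in range(horizon):
            avg_r = sum(rewards) / len(rewards) if rewards else 0.0
            state, r = step(mdp, state, formula(state, avg_r), rng)
            rewards.append(r)
            ret += r
        total += ret
    return total / n_mdps

# Policy search over the formula set: keep the formula with the best average return.
best = max(FORMULAS, key=evaluate)
print("best formula average return:", evaluate(best))

Because each formula is scored against MDPs drawn from the distribution rather than a single fixed MDP, the selected formula remains meaningful after the environment changes, which is the intuition behind the transfer bound stated in the abstract.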