This paper studies the task assignment problem of a high-speed handling system with two robots. In the Markov decision process (MDP) model of the system, the state variables are a hybrid of continuous and discrete values; the state space is complex and suffers from the "curse of dimensionality," which makes traditional numerical optimization difficult to apply. Exploiting the fast convergence and strong adaptability of the cerebellar model articulation controller (CMAC), this structure is used as an approximator of the Q-value function and is combined with Q-learning and the concept of performance potential, yielding a CMAC-Q learning optimization algorithm applicable to either the average or the discounted performance criterion. Simulation results show that this neuro-dynamic programming method saves memory and achieves higher optimization accuracy and faster optimization than the conventional Q-learning algorithm.
The task assignment problem of a high-speed handling system with two robots is studied in this paper. In the underlying Markov decision process (MDP) model, the state variables take both continuous and discrete values; the state space is complex and suffers from the curse of dimensionality, so traditional numerical optimization cannot be applied successfully to this system. Since the cerebellar model articulation controller (CMAC) offers fast convergence and good adaptability, it is employed to approximate the Q-value function. Combining this approximator with Q-learning and the concept of performance potential yields a CMAC-Q learning optimization algorithm that handles both the average and the discounted performance criteria. Simulations show that, compared with conventional Q-learning, the proposed neuro-dynamic programming approach requires less memory while achieving higher learning speed and better optimization performance.
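The core idea of the abstract, using a CMAC (tile-coding) structure as a Q-value approximator updated by Q-learning, can be sketched as follows. This is a minimal illustrative example, not the paper's algorithm: the one-dimensional state, the tile-coding layout, the learning rates, and the toy reward function are all assumptions introduced here, and the performance-potential formulation for the average-cost criterion is omitted for brevity.

```python
import numpy as np

class CMACQ:
    """Minimal CMAC (tile-coding) approximator for Q-values.

    Hypothetical sketch: several overlapping tilings of a 1-D
    continuous state, one weight per (tiling, tile, action);
    Q(s, a) is the sum of the active weights across tilings.
    """
    def __init__(self, n_tilings=8, n_tiles=10, n_actions=2,
                 low=0.0, high=1.0, alpha=0.1, gamma=0.95, seed=0):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.n_actions = n_actions
        self.low, self.high = low, high
        self.alpha = alpha / n_tilings   # split step size across tilings
        self.gamma = gamma
        rng = np.random.default_rng(seed)
        # each tiling is shifted by a random offset within one tile width
        self.offsets = rng.uniform(0, (high - low) / n_tiles, n_tilings)
        # weight table: (tiling, tile, action); +1 tile absorbs the offset
        self.w = np.zeros((n_tilings, n_tiles + 1, n_actions))

    def _tiles(self, s):
        """Index of the active tile in each tiling for state s."""
        width = (self.high - self.low) / self.n_tiles
        idx = ((s - self.low + self.offsets) // width).astype(int)
        return np.clip(idx, 0, self.n_tiles)

    def q(self, s, a):
        t = self._tiles(s)
        return self.w[np.arange(self.n_tilings), t, a].sum()

    def update(self, s, a, r, s_next):
        """One Q-learning step on the CMAC weights (discounted criterion)."""
        target = r + self.gamma * max(self.q(s_next, b)
                                      for b in range(self.n_actions))
        td_error = target - self.q(s, a)
        t = self._tiles(s)
        self.w[np.arange(self.n_tilings), t, a] += self.alpha * td_error

# Toy training run: action 1 is rewarded in the right half of the
# state space, action 0 in the left half (an assumed stand-in for
# the real two-robot assignment decision).
rng = np.random.default_rng(1)
agent = CMACQ()
for _ in range(5000):
    s = rng.uniform(0.0, 1.0)
    a = int(rng.integers(2))
    r = 1.0 if a == int(s > 0.5) else 0.0
    agent.update(s, a, r, rng.uniform(0.0, 1.0))
```

Because nearby states share active tiles, each update generalizes locally, which is the memory and speed advantage over a tabular Q-learning scheme that would need one entry per discretized state.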