To address the curse of dimensionality encountered when solving the planning problem in Partially Observable Markov Decision Processes (POMDPs), this paper presents a belief-state-space compression algorithm based on Non-negative Matrix Factorization (NMF) updating rules, which reduces the high-dimensional belief state space with low error in two steps. First, the algorithm exploits the structure of the factored POMDP to obtain factored representations of states, observations, and actions; it then decomposes and compresses the transition functions using the conditional independence encoded in the dynamic Bayesian network, and removes zero-probability entries to reduce the sparsity of the belief state space. Second, it adopts a value-directed compression approach so that the approximate optimal policy computed after dimensionality reduction remains consistent with the original optimal policy, and it uses NMF updating rules instead of Krylov iterations to accelerate the compression. The proposed algorithm not only keeps the value function and reward function of the belief states unchanged after dimensionality reduction, but also preserves the piecewise linear and convex property needed to compute the optimal policy by dynamic programming. Experiments demonstrate that the proposed belief compression algorithm achieves lower error rates and faster convergence.
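As an illustration only, the minimal Python sketch below applies the standard Lee-Seung NMF multiplicative update rules to a hypothetical belief matrix B (one belief vector per row), producing a low-rank factorization B ≈ WH in which the rows of W act as compressed belief states. The function name nmf_compress, the toy belief data, and all parameter values are assumptions for this sketch; it shows only the generic multiplicative updates and does not implement the paper's value-directed constraints that keep the value and reward functions unchanged after compression.

import numpy as np

def nmf_compress(B, k, n_iter=200, eps=1e-9):
    """Approximate a non-negative belief matrix B (n_beliefs x n_states)
    as B ~= W @ H with W >= 0 (n_beliefs x k) and H >= 0 (k x n_states),
    using Lee-Seung multiplicative update rules. The rows of W serve as
    k-dimensional compressed belief states. (Illustrative sketch only.)"""
    n, m = B.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        # Multiplicative updates minimizing the Frobenius reconstruction error.
        H *= (W.T @ B) / (W.T @ W @ H + eps)
        W *= (B @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage (hypothetical data): compress sparse belief vectors over 6 states to 3 dimensions.
B = np.array([
    [0.7, 0.3, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.6, 0.4],
    [0.4, 0.1, 0.3, 0.2, 0.0, 0.0],
])
W, H = nmf_compress(B, k=3)
print(np.round(W @ H, 3))  # low-rank reconstruction of the belief matrix

The appeal of multiplicative updates in this setting is that they preserve non-negativity of the factors at every iteration, so the compressed representation can still be interpreted as (unnormalized) belief mass, whereas Krylov-subspace methods do not guarantee non-negative factors.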