POMDPs are widely used to model agents making decisions in stochastic, partially observable environments. Because solvers for the complete POMDP scale poorly, this paper proposes decomposing a factored POMDP into a set of restricted POMDPs and solving each such model independently to obtain a value function; the value functions of the restricted POMDPs are then combined to form a policy for the complete POMDP. The paper mainly describes the process of identifying the state variables associated with independent tasks, and how to construct a model restricted to a single task. The method is applied to two RockSample problems of different sizes, and the experimental results show that it produces good policies.
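To make the decompose-solve-combine scheme concrete, the sketch below shows one way the pieces could fit together in Python. It is only an illustration under stated assumptions, not the paper's implementation: the names (RestrictedPOMDP, project_belief, combined_policy) are hypothetical, the restricted models are assumed to be solved offline by any standard POMDP solver, and the per-task Q-values are combined by simple summation, which may differ from the combination rule the paper actually uses.

```python
# A minimal sketch of the decompose-solve-combine scheme described in the
# abstract. All names here (RestrictedPOMDP, project_belief, the additive
# combination of Q-values) are illustrative assumptions, not the paper's API.
from typing import Callable, Dict, List, Tuple

Belief = Tuple[Tuple[str, float], ...]  # projected belief as (variable, probability) pairs

class RestrictedPOMDP:
    """A POMDP restricted to the state variables of one independent task."""
    def __init__(self, task_vars: List[str], actions: List[str]):
        self.task_vars = task_vars            # variables relevant to this task only
        self.actions = actions
        # Q-values produced by solving this restricted model offline
        # (e.g., with any off-the-shelf POMDP solver); filled in elsewhere.
        self.q: Dict[Belief, Dict[str, float]] = {}

    def q_value(self, belief: Belief, action: str) -> float:
        return self.q.get(belief, {}).get(action, 0.0)

def project_belief(full_belief: Dict[str, float], task_vars: List[str]) -> Belief:
    """Marginalize the full belief onto one task's state variables."""
    return tuple(sorted((v, full_belief[v]) for v in task_vars))

def combined_policy(models: List[RestrictedPOMDP],
                    actions: List[str]) -> Callable[[Dict[str, float]], str]:
    """Greedy policy over the sum of the restricted models' Q-values.

    Summing per-task Q-values assumes the tasks contribute roughly
    additively to reward; a max-based combination is another plausible
    rule, and the paper may use a different one.
    """
    def act(full_belief: Dict[str, float]) -> str:
        def score(a: str) -> float:
            return sum(m.q_value(project_belief(full_belief, m.task_vars), a)
                       for m in models)
        return max(actions, key=score)
    return act
```

In a RockSample-style domain, each rock could correspond to one independent task, so each RestrictedPOMDP would carry only that rock's state variables; the combined policy then scores each action against every per-rock value function and acts greedily on the total.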