To address the problems of excessively long learning times and poor convergence caused by large state spaces in reinforcement learning (RL), this paper proposes VQRL, a lookup-table reinforcement learning method based on vector quantization (VQ). The method approximates the continuous state space of RL with the codebook vectors of a vector quantizer, which effectively solves the state-space partitioning problem and improves the speed of convergence. In addition, following equal-distortion theory, a distortion-sensitive self-organizing feature map (SOFM) neural network is used for vector quantization, yielding better generalization over the state space. The method was applied to behavior learning for a reactive mobile robot; the experiments confirm its effectiveness and show that it can solve robot navigation problems in complex, unknown environments.
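The core idea described above can be illustrated with a minimal sketch: a codebook of prototype vectors discretizes a continuous state, and tabular Q-learning runs on the resulting indices. This is an assumption-laden toy, not the paper's implementation; the online winner-take-all update below stands in for the distortion-sensitive SOFM training, and all sizes, rates, and the reward are hypothetical.

```python
import numpy as np

# Hedged sketch of VQ-based state aggregation for tabular Q-learning.
# The codebook plays the role of the SOFM-trained vector quantizer from
# the abstract; here it is adapted with a plain winner-take-all update
# (no neighborhood function or distortion sensitivity).

rng = np.random.default_rng(0)

N_CODES = 16          # codebook size = number of discrete states (assumed)
STATE_DIM = 2         # dimensionality of the continuous state (assumed)
N_ACTIONS = 4         # hypothetical action set size
LR_CODE = 0.05        # codebook learning rate (assumed)
ALPHA, GAMMA = 0.1, 0.9

codebook = rng.uniform(0.0, 1.0, size=(N_CODES, STATE_DIM))
Q = np.zeros((N_CODES, N_ACTIONS))

def quantize(state, adapt=True):
    """Map a continuous state to its nearest codebook vector's index,
    optionally nudging that vector toward the state (online VQ update)."""
    dists = np.linalg.norm(codebook - state, axis=1)
    winner = int(np.argmin(dists))
    if adapt:
        codebook[winner] += LR_CODE * (state - codebook[winner])
    return winner

def q_update(s, a, r, s_next):
    """Standard tabular Q-learning step on quantized state indices."""
    i, j = quantize(s), quantize(s_next)
    Q[i, a] += ALPHA * (r + GAMMA * Q[j].max() - Q[i, a])

# Toy interaction loop on random transitions, standing in for the
# robot's sensor readings; the reward is purely illustrative.
for _ in range(200):
    s = rng.uniform(0.0, 1.0, STATE_DIM)
    s_next = np.clip(s + rng.normal(0.0, 0.05, STATE_DIM), 0.0, 1.0)
    a = int(rng.integers(N_ACTIONS))
    r = float(-np.linalg.norm(s_next - 0.5))
    q_update(s, a, r, s_next)
```

Because the codebook vectors move toward the states actually visited, the discretization concentrates resolution where the agent spends its time, which is the practical benefit the abstract attributes to VQ-based partitioning.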