Neural sampling is a recently proposed Gibbs sampling algorithm implemented by the dynamics of spiking neural networks, and a promising route to Bayesian probabilistic inference on neuromorphic hardware. Its biological plausibility lies in modeling neurons that transmit information through spikes, post-synaptic potentials, and refractory effects. This paper first surveys existing work on neural sampling in three parts. The first part covers the abstract models of neural sampling, including its theory and a concrete model for sampling from arbitrary Bayesian networks. The second part covers hardware implementations, including a method that approximates continuous-time neural sampling dynamics with a leaky integrate-and-fire (LIF) neuron model. The third part combines the first two: it covers training conventional machine learning models with spiking neural network dynamics, together with simulations of this training on conventional computers; specifically, it covers an event-driven contrastive divergence (CD) algorithm, based on spike-timing-dependent plasticity (STDP), for training restricted Boltzmann machines (RBMs). Finally, in our own work we replace the conventional Gibbs sampling in the CD and persistent CD (PCD) algorithms for training RBMs with neural sampling. We first analyze two properties of neural sampling, namely its sensitivity to the initial state and its slow mixing rate, and then propose methods that remove the negative effects of both on training. Preliminary experiments on the MNIST dataset show that training algorithms based on the revised neural sampling recover reconstruction performance similar to that of algorithms based on conventional Gibbs sampling. Probabilistic methods have become mainstream in machine learning; work on neural sampling provides a way to run probabilistic models at low energy cost on neuromorphic hardware, and may eventually help raise the level of intelligence of mobile devices.
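The refractory dynamics alluded to above can be illustrated with a discrete-time sketch of neural sampling for a Boltzmann distribution: each neuron carries a refractory counter, stays "on" for a fixed number of steps after spiking, and spikes with a sigmoidal probability of its membrane potential. This is a minimal NumPy sketch in the spirit of the abstract model the paper surveys, not the paper's own code; the function name `neural_sampling` and all parameter values are illustrative choices, and it assumes a target with a symmetric, zero-diagonal weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neural_sampling(W, b, tau=10, steps=20000):
    # Discrete-time neural sampling for a Boltzmann distribution
    # p(z) ~ exp(b.z + z.W.z / 2), with W symmetric and zero-diagonal.
    # Neuron k carries a refractory counter zeta_k; its binary state
    # is z_k = 1 while zeta_k > 0.
    K = len(b)
    zeta = np.zeros(K, dtype=int)
    samples = np.zeros((steps, K))
    for t in range(steps):
        z = (zeta > 0).astype(float)
        for k in rng.permutation(K):
            u = b[k] + W[k] @ z          # membrane potential of neuron k
            if zeta[k] <= 1:
                # free (or refractory period just expiring): may spike
                # with probability sigmoid(u - log tau)
                if rng.random() < sigmoid(u - np.log(tau)):
                    zeta[k] = tau        # spike: stay "on" for tau steps
                else:
                    zeta[k] = 0
            else:
                zeta[k] -= 1             # count down the refractory period
            z[k] = float(zeta[k] > 0)
        samples[t] = z
    return samples

# Single neuron, no couplings: the fraction of time it is "on"
# converges to sigmoid(b) regardless of tau.
b = np.array([0.5])
W = np.zeros((1, 1))
samples = neural_sampling(W, b)
on_fraction = samples[:, 0].mean()
```

The single-neuron check makes the stationary behavior concrete: despite being forced to stay "on" for `tau` steps after each spike, the neuron compensates by spiking less often, so its marginal still matches the target sigmoid. The consecutive samples are strongly correlated over a window of roughly `tau` steps, which is one way to see the slow mixing the paper analyzes.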
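The conventional CD-1 baseline that the work modifies can be sketched as well: one Gibbs step from the data gives the negative-phase statistics for the gradient estimate. This is a minimal NumPy sketch on toy binary data, assuming binary visible and hidden units; `cd1_step` and the hyperparameters below are our illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

def cd1_step(W, b, c, v0, lr=0.1):
    # One CD-1 update: a single Gibbs step starting from the data v0.
    ph0 = sigmoid(v0 @ W + c)        # positive phase: p(h=1|v0)
    h0 = bernoulli(ph0)
    pv1 = sigmoid(h0 @ W.T + b)      # negative phase: reconstruct v
    v1 = bernoulli(pv1)
    ph1 = sigmoid(v1 @ W + c)
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return pv1                       # reconstruction probabilities

n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)
c = np.zeros(n_hid)
data = bernoulli(np.full((8, n_vis), 0.5))   # toy binary "dataset"
for _ in range(50):
    recon = cd1_step(W, b, c, data)
```

Replacing the single Gibbs step (`h0 -> v1 -> ph1`) with a run of spiking dynamics is, at this level of abstraction, what the neural-sampling variants of CD and PCD amount to; the paper's analysis concerns why that substitution needs correction before the gradients behave comparably.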