As a typical realization of connectionist intelligence, the neural network mimics the information-processing mechanisms of the human brain through broadly interconnected structures and effective learning rules. It is an important method in the development of artificial intelligence and an effective tool in current research on brain-like intelligence. Over its seventy-year history, the neural network has endured doubt, criticism, and neglect, yet it has also flourished several times and produced many remarkable achievements. From the M-P neuron and the Hebb learning rule of the 1940s, to the Hodgkin-Huxley equations, the perceptron, and the adaptive filter of the 1950s, and on to the self-organizing map, the Neocognitron, and adaptive resonance networks of the 1970s and 1980s, many neural computation models have become classical methods in signal processing, computer vision, natural language processing, and optimization, exerting a milestone influence on these fields. Today, deep learning, which models the complex hierarchical cognition of the human brain, has become a major research direction in brain-like intelligence. Deep neural networks built by stacking more layers give machines the ability to form abstract concepts; they have achieved great success in many fields and have set off a new wave of neural network research. This paper reviews the development of neural networks, surveys the current research progress and open problems, and discusses possible future directions.