The main stages in the history of the study and application of artificial neural networks:

  • 1943 – W. McCulloch and W. Pitts formalize the concept of a neural network in their foundational article on the logical calculus of ideas immanent in nervous activity.
  • 1948 – N. Wiener and colleagues publish a work on cybernetics; its main idea is to describe complex biological processes with mathematical models.
  • 1949 – D. Hebb proposes the first learning rule (Hebbian learning).
  • 1958 – F. Rosenblatt invents the single-layer perceptron and demonstrates its ability to solve classification problems. The perceptron gains popularity and is used for pattern recognition, weather forecasting, etc.
  • 1960 – B. Widrow, together with his student M. Hoff, develops ADALINE based on the delta rule (the Widrow–Hoff rule); it is immediately applied to prediction and adaptive control problems. ADALINE (an adaptive linear element, essentially an adaptive adder) is now a standard component of many signal processing systems.
  • 1963 – at the Institute for Information Transmission Problems of the USSR Academy of Sciences, A. P. Petrov carries out a detailed study of problems "difficult" for the perceptron.
  • 1969 – M. Minsky publishes a formal proof of the perceptron's limitations, showing that it cannot solve certain problems associated with invariance of representations. Interest in neural networks drops sharply.
  • 1972 – T. Kohonen and J. Anderson independently propose a new type of neural network capable of functioning as a memory.
  • 1973 – B. V. Khakimov proposes a nonlinear model with spline-based synapses and applies it to problems in medicine, geology, and ecology.
  • 1974 – Paul J. Werbos and A. I. Galushkin independently invent the error backpropagation algorithm for training multilayer perceptrons.
  • 1975 – K. Fukushima presents the Cognitron, a self-organizing network designed for invariant pattern recognition; however, invariance is achieved only by memorizing nearly all possible states of an image.
  • 1982 – after a period of neglect, interest in neural networks rises again. J. Hopfield shows that a neural network with feedback is a system that minimizes an energy function (the Hopfield network). Kohonen presents models of an unsupervised learning network (Kohonen's neural network) that solve problems of clustering, data visualization (the Kohonen self-organizing map), and other tasks of preliminary data analysis.
  • 1986 – David E. Rumelhart, G. E. Hinton, and Ronald J. Williams, and independently S. I. Bartsev and V. A. Okhonin (the Krasnoyarsk group), rediscover and develop the backpropagation method. An explosion of interest in trainable neural networks begins.
  • 2007 – Geoffrey Hinton at the University of Toronto creates algorithms for deep learning of multilayer neural networks. A key to this success is his use of restricted Boltzmann machines (RBMs) to pre-train the lower layers of the network.