Neural network training is the process of configuring a network's parameters by simulating the environment in which the network is embedded. The type of training is determined by how the parameters are adjusted; the two broad families are supervised and unsupervised learning algorithms.

Supervised learning assumes that for each input vector there is a target vector representing the required output; together they form a training pair. The network is typically trained on a number of such pairs. For each pair, the network output is computed and compared with the corresponding target vector, and the weights are then adjusted according to an algorithm that seeks to minimize the error. The vectors of the training set are presented sequentially, errors are computed, and weights are adjusted for each vector until the error across the entire training set falls to an acceptable level.
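The following is a minimal sketch of such a supervised training loop, not a reference implementation: a single-layer network with a sigmoid output is fit to (input, target) pairs by gradient descent on the squared error. The data, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training pairs: each input vector has a target vector.
X = rng.normal(size=(100, 3))                          # input vectors
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # target vectors

W = rng.normal(scale=0.1, size=(3, 1))   # weights to be adjusted
b = np.zeros(1)
lr = 0.5                                 # learning rate (assumed value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    total_error = 0.0
    for x, t in zip(X, T):               # present training pairs sequentially
        y = sigmoid(x @ W + b)           # compute the network output
        err = y - t                      # compare with the target vector
        # adjust weights in the direction that reduces the squared error
        grad = err * y * (1 - y)
        W -= lr * np.outer(x, grad)
        b -= lr * grad
        total_error += (err ** 2).item()
    if total_error / len(X) < 0.01:      # stop once the error is acceptably low
        break
```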

Unsupervised learning is a far more plausible learning model in terms of the biological roots of artificial neural networks. Developed by Kohonen and many others, it requires no target vector and therefore no comparison with predetermined ideal answers. The training set consists only of input vectors. The learning algorithm adjusts the network's weights so that it produces consistent outputs: presenting sufficiently similar input vectors yields the same output. The learning process thus extracts the statistical properties of the training set and groups similar vectors into classes.
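Below is a minimal sketch of unsupervised competitive learning in the Kohonen style, under assumed data and hyperparameters: only input vectors are used, and the weight vector closest to each input (the winner) is pulled toward it, so similar inputs come to activate the same unit. The cluster count, learning rate, and synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Input vectors drawn from three loose groups; no target vectors exist.
centers = np.array([[0, 0], [5, 5], [0, 5]], dtype=float)
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])
rng.shuffle(X)

n_units = 3                               # assumed number of output units
W = rng.normal(size=(n_units, 2))         # one weight vector per output unit
lr = 0.1                                  # learning rate (assumed value)

for epoch in range(50):
    for x in X:
        distances = np.linalg.norm(W - x, axis=1)
        winner = int(np.argmin(distances))   # unit most similar to the input
        W[winner] += lr * (x - W[winner])    # move the winner toward the input

# After training, nearby inputs map to the same winning unit, i.e. the same class.
labels = np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)
```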