Parallel processing and neural networks

Today a significant part of the development of artificial intelligence is carried out in the area of neural networks and parallel processing. Researchers and neuroscientists working in this area are often referred to as connectionists. They hold that mental functions such as cognition and learning depend upon the way in which neurons interconnect and communicate in the brain. It has long been known that parallel processing in the brain is self-organizing: several subprocesses which together handle a major task are executed simultaneously in different parts of the brain's neural network. The main advantage is processing speed, which is vastly superior to that of serial functioning.

The aim of constructing parallel computer networks of artificial neurons is to approach something that functions in a manner similar to the real brain. These networks exist mostly as computer simulations, seldom as hardware. Experiments with neuron-like elements embodied in computers have been carried out for many years, and this is now an established area in its own right. The main advantage of artificial neural networks is their capability to imitate, to a certain extent, the brain's plasticity: in the brain, the interconnections between neurons change all the time to meet the task at hand. In short, the brain constantly reconfigures and adapts itself.

The main component of an artificial neural network, the neuron, is linked to its neighbours through adjustable connections. Like real nervous systems, which learn by adjusting the strength of their synaptic connections, the artificial neural network learns by adjusting the weighting on its connections. The strength of a signal transmitted through a certain connection depends on the weight of that connection. The neurons, or units, of a neural network fall into three main categories: input, hidden and output. Signals are sent from a layer of input units to a layer of output units; on their way, they pass through the hidden units, whose function is to increase the computational power of the network. When the input units are activated, a pattern of activation spreads throughout the network. Each output unit sums up its arriving signals and switches itself on if the accumulated activation exceeds a certain threshold. See Figure 7.1.
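A minimal sketch of this three-layer arrangement is given below. The layer sizes, weight values and threshold are illustrative assumptions, not taken from the text: each hidden unit forms a weighted sum of the inputs, and each output unit forms a weighted sum of the hidden activity and switches on only when that sum exceeds the threshold.

```python
# Sketch of a three-layer feedforward network with threshold output units.
# All weights and the threshold are assumed values for illustration.

def feedforward(inputs, w_hidden, w_output, threshold=0.5):
    """Propagate an input pattern through hidden units to output units.

    Each unit sums its weighted incoming signals; an output unit
    switches on (1) only if that sum exceeds the threshold.
    """
    # Hidden units: weighted sum of the input signals.
    hidden = [sum(w * x for w, x in zip(weights, inputs))
              for weights in w_hidden]
    # Output units: weighted sum of hidden activity, then thresholded.
    return [1 if sum(w * h for w, h in zip(weights, hidden)) > threshold
            else 0
            for weights in w_output]

# Two input units, two hidden units, one output unit.
w_hidden = [[0.8, 0.2],   # weights into hidden unit 0
            [0.4, 0.9]]   # weights into hidden unit 1
w_output = [[0.5, 0.5]]   # weights into the single output unit

print(feedforward([1, 0], w_hidden, w_output))  # [1]: activation 0.6 > 0.5
print(feedforward([0, 0], w_hidden, w_output))  # [0]: no activation
```

Learning, in this picture, amounts to changing the numbers in w_hidden and w_output until the output pattern matches the desired one.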

The network learns by comparing a programmed input pattern of activity with the resulting output pattern of activity, a process called mapping. More advanced modes of mapping occur in recurrent networks, where activation patterns emerging in the hidden units are recirculated through the network. For certain input patterns the hidden pattern generated is sent back to the input units with a small delay, so that it coincides with the next input. In this way the network remembers previous input patterns and learns the relationships between different input patterns. See Figure 7.2.
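The recurrent principle can be sketched as follows, assuming an Elman-style arrangement in which the hidden pattern from the previous step (the "context") is combined with the current input. The weights and the tanh squashing function are illustrative assumptions.

```python
# Sketch of a recurrent step: the hidden pattern produced one step
# earlier is fed back, with a one-step delay, alongside the new input.
# All weights and the tanh squashing are assumed for illustration.

import math

def recurrent_step(inputs, context, w_in, w_ctx):
    """New hidden activity from the current input plus the delayed
    hidden pattern of the previous step (the context)."""
    return [math.tanh(sum(w * x for w, x in zip(wi, inputs)) +
                      sum(w * c for w, c in zip(wc, context)))
            for wi, wc in zip(w_in, w_ctx)]

w_in  = [[0.6, 0.3], [0.2, 0.7]]   # input -> hidden weights
w_ctx = [[0.5, 0.1], [0.4, 0.5]]   # delayed hidden -> hidden weights

context = [0.0, 0.0]               # no previous pattern at the start
for step, pattern in enumerate([[1, 0], [0, 1], [1, 1]]):
    context = recurrent_step(pattern, context, w_in, w_ctx)
    print(step, [round(h, 3) for h in context])
```

Because the context carries traces of earlier inputs, the same input pattern produces different hidden activity depending on what preceded it, which is how the network remembers.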

Neural networks work best when the input data are fuzzy, since they do not depend on clear-cut yes/no decisions; their decisions are made by a complex averaging out of all the input they receive (see the sketch below). Some proponents of neural networks see them as learning machines in an evolutionary approach to artificial intelligence. The assumption is that the underlying system has a relatively simple structure and that its complexity emanates from large numbers of learned interconnections. With better computers, the evolution of intelligence could then proceed thousands of times faster than natural evolution.
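The following small, self-contained sketch illustrates the averaging-out point; the weights and patterns are assumed values. A decision based on the weighted sum of many inputs survives moderate noise in the individual inputs, which is why no clear-cut yes/no values are needed.

```python
# Illustration (assumed numbers) of tolerance to fuzzy input: a
# decision taken by weighted averaging over many inputs rarely
# changes when single inputs are perturbed.

weights = [0.1] * 10                     # ten equally weighted inputs

def decide(inputs, threshold=0.5):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

clean = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]             # clear-cut pattern
fuzzy = [0.9, 1, 0.8, 1, 1, 0.7, 1, 0.2, 0, 0.1]   # noisy version

print(decide(clean))   # 1 (weighted sum 0.70)
print(decide(fuzzy))   # still 1 (weighted sum 0.67): noise averages out
```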

Figure 7.1 Components of an artificial neural network.

Figure 7.2 Principle of a recurrent network.

Biologists do not subscribe to this idea; they state that intelligent behaviour is the result of a basic built-in structure, not of learning. A small insect with a few hundred neurons in its brain is extremely structured, and its seemingly intelligent behaviour is not the result of learning. As for evolution at machine speed, this must be considered impossible: an evolutionary cycle must proceed at the speed at which real changes occur in the environment, not at the speed at which internal changes can be computed. An artificial computer network therefore cannot evolve faster than any other system that has to adapt to changes in its environment.

It must also be considered very doubtful whether detailed knowledge of neurological mechanisms will ever reveal the true nature of intelligence; similarly, exact and detailed facts about a computer chip will never reveal the secrets of its associated software.

Source: Skyttner, Lars (2006). General Systems Theory: Problems, Perspectives, Practice, 2nd edn. World Scientific Publishing.
