Progressive scientific optimists in Western societies often promote two main projects: nuclear fusion and artificial intelligence. Billions of dollars have been invested in both areas with little to show for it, yet proponents still claim that a breakthrough is very near. To understand why the difficulties seem insurmountable just before the goal is reached, it is necessary to examine some of the main thoughts and attitudes of the area of artificial intelligence, or AI. As we shall see, however, this investigation will be less about computers than about the nature of consciousness and mind.
AI has a very straightforward goal: to augment and improve human mental capability by replacing human mental activity with the activity of a computer. The construction of such an intelligence amplifier is a natural consequence of developments in modern technological society, with its existing power potential and overall complexity. While our physical strength has been multiplied several hundred times by the use of conventional machines, no comparable growth of human mental capacity has taken place. With an intelligence amplifier we might strengthen our inborn intelligence sufficiently to gain the overall comprehension we so obviously lack in handling our dangerous strength. To achieve this, AI researchers state, it is necessary to find ways to formalize human cognitive processes so that they can be programmed.
Although a subarea of the discipline of computer science, AI has emerged as something more than an ordinary specialism. Its success, if any, would have profound philosophical, ethical and practical implications for the whole of human society. Deep feelings are also disturbed when the superiority of the human race is challenged. Today we accept being beaten in mental arithmetic, and eventually in chess, by our 'dumb' apparatus; but our intelligence, beliefs and feelings, which constitute our total human preeminence, we generally want to reserve for ourselves.
A closer look at the area of AI reveals some strong and influential supporters, the functionalists, who believe that within a few years computers will be capable of doing everything a human mind can do. For them, thinking is simply a matter of information processing, and there is no significant distinction between the way human beings think and the way machines think (except that computers do it faster). When the program algorithms managing the computer's behaviour reach a certain critical point of complexity and function, intrinsic qualities such as free will and consciousness will appear. Even human attributes such as feeling pain, contentment and a sense of humour will emerge. Consequently, the claim that only brains within living individuals can become conscious is called 'carbon chauvinism' by the functionalists. One question which the functionalists must handle is quite obvious: what will happen if machine intelligence does jump-start itself? How do we turn it off if we do not like the result?
Another question is whether the consciousness of an AI could be engineered to be moral. Proponents of the area state that because such a consciousness does not have to fight for a biological existence, it will automatically be benevolent.
This school is also supported by several logicians and linguists who recognize that their specific area is in principle limited and therefore likely to be computable in the near future. In their view the mind is a machine operating according to the known laws of physics, and the human brain with all of its activities can be fully understood (by the human brain). Intelligence can be broken down into discrete modules with defined functions such as perceiving, planning actions and executing actions. Electronic artefacts can then presumably be constructed to perform all these activities satisfactorily, using programmed internal models.
Regarding the internal representation of the world, functionalists state that a richly articulated computer model poses no fundamental problems. The world is largely stable and can be sampled again and again by sensors. Relevant changes can be detected and added to the input when necessary.
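To make this modular picture concrete, the following is a minimal sketch in Python of intelligence decomposed into perception, planning and execution modules exchanging data through an explicit internal world model that is updated from repeated sensor samples. All names and the toy obstacle scenario are illustrative inventions, not taken from any actual AI system.

```python
# Minimal sketch of the modular, model-based view: separate perception,
# planning and execution components around an internal world model.

class WorldModel:
    """Internal representation of a largely stable world."""
    def __init__(self):
        self.facts = {}

    def update(self, observations):
        # Only relevant changes are merged into the stored model.
        for key, value in observations.items():
            if self.facts.get(key) != value:
                self.facts[key] = value

def perceive(raw_readings):
    """Perception module: turn raw sensor readings into symbolic facts."""
    return {name: round(value, 1) for name, value in raw_readings.items()}

def plan(model):
    """Planning module: choose an action from the current world model."""
    if model.facts.get("obstacle_distance", 10.0) < 1.0:
        return "turn"
    return "advance"

def act(action):
    """Execution module: carry out the chosen action."""
    print("executing:", action)

# One perceive-plan-act cycle: the world is sampled, the model updated,
# and an action derived purely from the programmed internal model.
model = WorldModel()
model.update(perceive({"obstacle_distance": 0.73}))
act(plan(model))
```

On this view, making the artefact more intelligent is simply a matter of refining each module; the critics discussed below deny precisely that such a decomposition is possible.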
A related faction comprises even stronger believers, the behaviourists, who maintain that if a computer could be instructed to behave exactly like a conscious human being, then it would automatically assume the feelings of this creature. While we are still a long way from that goal, the behaviourists claim that such mental qualities already exist in today's computers. That is, every computer, even the simplest mechanical one, which performs a fundamental logical sequence of operations, has a low-level mental quality. The difference between low and advanced mentality, and between the existence or non-existence of a mind, is only a question of complexity, of the number of states and functions involved. Thus the behaviourists put an equals sign between doing and being when they state that to behave seemingly consciously is also to be conscious. But they seem to forget that sophisticated programming alone is not enough to make a computer show human consciousness. Curiously, predictable behaviour is computer-like, while randomness is human.
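The behaviourist criterion can be illustrated with a deliberately trivial state machine: on that view, the only difference between the toy below and a mind is the number of states and transition rules involved. The machine and its events are invented for illustration.

```python
# A two-state machine: by the behaviourist criterion its 'mentality'
# differs from ours only in the number of states and functions.

TRANSITIONS = {
    ("idle", "stimulus"): "responding",
    ("responding", "done"): "idle",
}

def step(state, event):
    """Advance one step; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["stimulus", "done", "noise"]:
    state = step(state, event)
    print(event, "->", state)
```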
A conclusion which can be drawn from both the functionalist and the behaviourist views is that hardware is relatively unimportant. The software, with its specific structure and algorithms, is considered the critical part of the computer when they want to liberate information from the ballast of materiality. A consequence of this mentality is the idea of a disembodied selfhood and the dream of achieving immortality by uploading ourselves into huge computer programs residing in cyberspace.
In the discussion regarding AI, the concepts of weak and strong artificial intelligence are sometimes used. Here, weak AI is the position that a computer can simulate the behaviour of human cognition but lacks the capacity actually to experience mental states itself. Strong AI holds, on the contrary, that computers will be capable of cognitive mental states, that is, that they can be self-aware.
In contrast to these reductionist views of AI we have the non-believers, who state that AI only exposes what genuine intelligence is not ('If it works, it's not artificial intelligence!'). They oppose the notion that the mind can be reduced to a machine operating according to the well-known laws of physics. The creation of a self-conscious artificial intelligence is as yet impossible because biological brain functions are not algorithmic, while our present technology is. Existing knowledge is not enough to explain the mechanism of the mind and its emergent intelligence. Some of the non-believers claim that a true understanding of the brain is impossible because any explanatory device must possess a structure of a higher degree of complexity than that of the object to be explained: 'The brain has no brain inside itself to explain its own function.' In other words, humans can never completely understand their own brains. In the eyes of many non-believers, AI proponents show a mediaeval mentality in their attempt at an almost alchemical transmutation of dead machinery into a thinking being.
A well-known counter-argument to the proposition that simulated or artificial intelligence is really of the same kind as natural intelligence has been propounded by the Berkeley philosopher John Searle. It is a thought experiment called 'the Chinese Room' and supposes that a person sits in a closed room. This person can neither speak nor write Chinese, nor does he understand anything of the language. His task is to receive Chinese sentences written on paper through a slot in the wall. He then has to translate the text into English with the aid of an excellent dictionary containing exhaustive tables of Chinese ideographs. The translation is delivered through another slot in the wall to a receiving person outside the room.
Although the translation is fairly good, the translator does not understand the meaning of the Chinese ideographs used; he only manipulates symbols according to a set of rules. As the action inside the room duplicates the function of a computer doing the same task, it is obvious that no real understanding, mental awareness or 'thinking' is present. The receiver has no idea of how the translation is arranged; so far as he is concerned, the whole process is a black box with a certain input and its corresponding output. In spite of this, the room arrangement shows every sign of housing a very sophisticated translating intelligence. It is therefore more proper to say that the computer with its program provides a simulation of human thought, not a model of it.
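The room's rule-following character is easy to mimic in code. The sketch below, with an invented two-entry rule table standing in for Searle's exhaustive dictionary, maps symbols to symbols without any representation of their meaning, yet from the outside it appears to translate.

```python
# The Chinese Room as pure symbol manipulation: the rule table is a
# stand-in for the exhaustive dictionary; the entries are illustrative.

RULE_TABLE = {
    "你好": "hello",
    "世界": "world",
}

def chinese_room(message):
    """Apply the rules token by token; no understanding is involved."""
    return " ".join(RULE_TABLE.get(token, "?") for token in message.split())

# Seen from outside the room: input slot in, output slot out.
print(chinese_room("你好 世界"))  # -> hello world
```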
Many non-believers hold AI enthusiasts responsible for clinging to the now outdated scientific belief in reductive analysis and mechanistic modelling, and for the belief that all the secrets of nature will one day be fully understood. They also state that AI researchers have forgotten their own starting point: to model intelligent behaviour, not to create intelligence.
Another argument against AI is that intelligence is defined in terms of living systems and is thus not applicable to non-living computers. Intelligence and knowledge are the results of biological functions of living systems with bodies, defined by autopoiesis, a quality not existing in computers. Living systems are capable of self-replication, growth and repair. In their higher manifestations they have advanced nervous systems, part of which is the brain itself, managing individual physical existence. In a sense the whole human body is an information system, where all the molecules and cells contribute either to building up a communication system or to transmitting the signals which circulate through it.
Furthermore, living intelligent beings are biochemical creatures, guided by the very important capacity of their feelings. In a sense, the basis of intelligence is emotion. All emotions are inextricably tied up with a body and its states; electromagnetic machines controlled by a given number of lines of code do not have bodies. Memory retrieval in living creatures is likewise associated not with logical processes but with emotional experience. Our bodily experiences and intentions also cannot be separated from our language. The physical origin of language and its connection to the physical world, with its social relations, makes the speaking computer a hopeless contradiction. Bodily activity, with its elementary actions, constitutes the root of intelligence and consciousness. Mental phenomena can only be fully understood in the context of an organism interacting with its environment. Mind cannot be understood without some sort of embodiment, as we think with the whole body (the hand as an extension of the brain). The false idea of the disembodied mind has been the source of the metaphor of mind as a software program.
Attempts to get the computer to imitate the human cognitive system, to think for us, are therefore in principle as remarkable as expecting the tools of a craftsman to do the job rather than the craftsman himself. All this together places AI programs in the same category as aircraft compared with birds: imitations of a function but not of a process, manipulating not concepts but their physical correlates.
The definition of intelligence includes essential social components; that a disembodied computer with no childhood, no cultural practice and no feelings should be defined as intelligent is both nonsense and a self-contradiction. The term artificial intelligence relates to a machine and has no relevance at all when a comparison with human qualities is being made. 'Are machines intelligent?' is therefore a completely irrelevant question, of the same kind as 'Are machines resistant to AIDS?'
As for the human brain, it is the most advanced information-processing system hitherto known; adapted as it is to the unlimited variety of life, it is far too complex to be treated as, or replaced by, any kind of human artefact. The range of problems it has to handle includes the infinity of life and the infinity of human reflection. A computer can only equal or replace the human mind in limited applications involving procedural thinking and data processing.
What really differentiates men from machines is the human ability to handle language: to comprehend any one of an infinite number of possible expressions is something that cannot be expressed in mechanical terms. Another significant difference is that human beings have the ability to diagnose and correct their own limitations in a way that has no parallel in machines. This power of self-transcendence implies the move to another level, from which the shortcomings of the system can be seen. The machine can only work within the system itself, according to rules which it cannot change of its own accord. Being unable to break the rules set by its software, the computer cannot be considered creative, which implies a further important difference from the human brain. Creativity involves discontinuity and abrupt breaks from past patterns of thinking, something impossible for a computer which operates with continuity.
Admittedly, a more realistic attitude now seems to be emerging among the new generation of AI researchers using the new breed of parallel computers. Intelligence is now seen less as a centralized, disembodied function and more as an epiphenomenon of the process of being-in-the-world. 'There is no mind without a body in the hard world.' Intelligence should thus be built up through perceptual experience and learning, rather than by the implementation of an internal main model of the world. It is not a problem that can be encoded in rules, and hence in software.
Consequently, robotics has become an important area of interest, where the hope is to trace intelligence through the experience of touch, sight, sound and smell. Using artificial sense organs, robots should be able to build their own internal model of the world. The problem is that both childhood and evolutionary history must be repeated and implemented if something called intelligence is to be replicated. Bodies and brains must evolve together, like those of a living organism. Robotics researchers can show how primitive intelligent behaviour may emerge from cooperation between a number of simple, independent systems in a distributed control architecture. The principles are the following (a sketch of such layered control follows the list):
- Do simple things first
- Learn to do them perfectly
- Add new layers of activity over the simple tasks
- Do not change the simple things
- Learn to do the new layers as perfectly as the simple ones
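The following is a minimal sketch of such a distributed control scheme, assuming a layering in which simple behaviours are written first, left unchanged, and new layers are stacked on top. The behaviours and sensor names are invented for illustration.

```python
# Layered control: each layer is a simple, independent behaviour.
# Layers are consulted in priority order; the first one with an opinion
# wins, so the basic behaviours are never changed by the layers above.

def avoid(sensors):
    """The simple thing, done first and never altered."""
    if sensors["obstacle_distance"] < 0.5:
        return "back_off"
    return None  # nothing to say; defer to other layers

def seek_light(sensors):
    """A later layer, added on top of the simpler behaviour."""
    if sensors["light_level"] > 0.8:
        return "approach_light"
    return None

def wander(sensors):
    """Default layer: roam when every other layer is silent."""
    return "wander"

LAYERS = [avoid, seek_light, wander]  # priority order, simplest first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"obstacle_distance": 0.3, "light_level": 0.9}))  # back_off
print(control({"obstacle_distance": 2.0, "light_level": 0.9}))  # approach_light
```

Behaviour that looks purposeful emerges from the interplay of these simple layers, without any central model of the world.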
A topical question within the field is what will happen if it becomes possible to start from early childhood with a complete storehouse of knowledge inherited from one's predecessors. Here the human brain may have to compete with an electronic memory which can be accessed a million times faster than human synapses and which can be downloaded to others at negligible cost in time and money.
Whatever the preferred perspective on AI, a dramatic point will be reached when the transistor count in the central processing unit of a computer reaches the human-brain equivalent. In quantitative terms this equals about one hundred billion, the number of neurons in the human brain. Today, a common processor chip (for example, the Intel 486) comprises 1.2 million transistors. With present trends in chip manufacture it is not unrealistic to envisage one hundred million transistors on a single chip within ten years. A comparable development within parallel computer processing should make possible the use of a thousand such processors, thus realizing the brain equivalent, with a possible clock speed of several hundred GHz. Which AI prophecies will then be realized we can only wait and see, as the only predictable aspect of the future is its unpredictability.
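The arithmetic behind the brain-equivalent figure is easy to check; the sketch below merely restates the numbers quoted above (the neuron count and the trend extrapolations are the author's figures, not measurements).

```python
# Checking the quoted figures: a thousand processors of one hundred
# million transistors each matches the brain's neuron count.

neurons_in_brain  = 100e9  # about one hundred billion neurons
intel_486         = 1.2e6  # transistors in an Intel 486
chip_in_ten_years = 100e6  # projected transistors per chip
processors        = 1000   # assumed parallel processors

total = processors * chip_in_ten_years
print("total transistors:", f"{total:.0e}")                    # 1e+11
print("brain equivalent reached:", total >= neurons_in_brain)  # True
print("growth over the 486:", round(chip_in_ten_years / intel_486), "x")  # 83 x
```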
Source: Skyttner, Lars (2006), General Systems Theory: Problems, Perspectives, Practice, 2nd edition, World Scientific.