Trends in Systems Theory

At a time when any novelty, however trivial, is hailed as being revolutionary, one is wary of using this label for scientific developments. Miniskirts and long hair are called a teenage revolution, and any new styling of automobiles or drug introduced by the pharmaceutical industry is announced in the same way; the word is an advertising slogan hardly fit for serious consideration. It can, however, be used in a strictly technical sense, i.e., “scientific revolutions” can be identified by certain diagnostic criteria.

Following Kuhn (1962), a scientific revolution is defined by the appearance of new conceptual schemes or “paradigms.” These bring to the fore aspects which previously were not seen or perceived, or were even suppressed in “normal” science, i.e., the science generally accepted and practiced at the time. Hence there is a shift in the problems noticed and investigated and a change in the rules of scientific practice, comparable to the switch in perceptual gestalten in psychological experiments, when, e.g., the same figure may be seen as two faces vs. a cup, or as a duck vs. a rabbit. Understandably, in such critical phases emphasis is laid on philosophical analysis which is not felt necessary in periods of growth of “normal” science. The early versions of a new paradigm are mostly crude, solve few problems, and the solutions given for individual problems are far from perfect. There is a profusion and competition of theories, each limited with respect to the number of problems covered and to the elegance of the solutions for those it does take into account. Nevertheless, the new paradigm does cover new problems, especially those previously rejected as “metaphysical.” These criteria were derived by Kuhn from a study of the “classical” revolutions in physics and chemistry, but they are an excellent description of the changes brought about by organismic and systems concepts, and elucidate both their merits and limitations.

Not surprisingly, systems theory comprises a number of approaches that differ in style and aims.

The system problem is essentially the problem of the limitations of analytical procedures in science. This used to be expressed in half-metaphysical statements, such as emergent evolution or “the whole is more than the sum of its parts,” but it has a clear operational meaning. “Analytical procedure” means that an entity investigated is resolved into, and hence can be constituted or reconstituted from, its parts put together, these procedures being understood both in their material and in their conceptual sense. This is the basic principle of “classical” science, which can be circumscribed in different ways: resolution into isolable causal trains, the search for “atomic” units in the various fields of science, etc. The progress of science has shown that these principles of classical science—first enunciated by Galileo and Descartes—are highly successful in a wide realm of phenomena.

Application of the analytical procedure depends on two conditions. The first is that interactions between the “parts” be nonexistent or weak enough to be neglected for certain research purposes. Only under this condition can the parts be “worked out,” actually, logically, and mathematically, and then be “put together.” The second condition is that the relations describing the behavior of the parts be linear; only then is the condition of summativity given, i.e., an equation describing the behavior of the total is of the same form as the equations describing the behavior of the parts, and partial processes can be superimposed to obtain the total process, etc.
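In formal dress (a minimal sketch, not from the original text), the two conditions may be written as follows; the symbols Q_i stand for the measures of the parts, as in the system equations referred to below.

```latex
% (1) Negligible interaction: each part obeys its own equation and can be
%     "worked out" in isolation,
\frac{dQ_i}{dt} = f_i(Q_i), \qquad i = 1, \dots, n ;
% (2) linearity: solutions of a linear law d\mathbf{Q}/dt = A\mathbf{Q}
%     superpose, which is the summativity condition (partial processes
%     add up to the total process):
\mathbf{Q}(t) = c_1\,\mathbf{Q}^{(1)}(t) + c_2\,\mathbf{Q}^{(2)}(t) .
```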

These conditions are not fulfilled in the entities called systems, i.e., consisting of parts “in interaction.” The prototype of their description is a set of simultaneous differential equations (pp. 55ff.), which are nonlinear in the general case. A system or “organized complexity” (p. 34) may be circumscribed by the existence of “strong interactions” (Rapoport, 1966) or interactions which are “nontrivial” (Simon, 1965), i.e., nonlinear. The methodological problem of systems theory, therefore, is to provide for problems which, compared with the analytical-summative ones of classical science, are of a more general nature.
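Written out, the prototype referred to (pp. 55ff.) is a set of simultaneous equations in which each rate of change depends, in general nonlinearly, on all the variables:

```latex
% The prototype description of a system of n parts "in interaction":
% simultaneous differential equations, nonlinear in the general case.
\frac{dQ_i}{dt} = f_i(Q_1, Q_2, \dots, Q_n), \qquad i = 1, 2, \dots, n .
```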

As has been said, there are various approaches to deal with such problems. We intentionally use the somewhat loose expression “approaches” because they are logically inhomogeneous, represent different conceptual models, mathematical techniques, general points of view, etc.; they are, however, in accord in being “systems theories.” Leaving aside approaches in applied systems research, such as systems engineering, operational research, linear and nonlinear programming, etc., the more important approaches are as follows. (For a good survey, cf. Drischel, 1968).

“Classical” system theory applies classical mathematics, i.e., calculus. Its aim is to state principles which apply to systems in general or to defined subclasses (e.g., closed and open systems), to provide techniques for their investigation and description, and to apply these to concrete cases. Owing to the generality of such description, it may be stated that certain formal properties will apply to any entity qua system (or open system, or hierarchical system, etc.), even when its particular nature, parts, relations, etc., are unknown or not investigated. Examples include generalized principles of kinetics applicable, e.g., to populations of molecules or biological entities, i.e., to chemical and ecological systems; diffusion, such as diffusion equations in physical chemistry and in the spread of rumors; the application of steady-state and statistical mechanics models to traffic flow (Gazis, 1967); and allometric analysis of biological and social systems.
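For instance, the allometric relation mentioned last takes the standard form shown below (given here only as an illustration): one variable grows as a power of another, so that the relation plots as a straight line on log-log coordinates.

```latex
% The allometric equation and its logarithmic (log-log) form:
y = b\,x^{\alpha}, \qquad \log y = \log b + \alpha \log x .
```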

Computerization and simulation. Sets of simultaneous differential equations as a way to “model” or define a system are, if linear, tiresome to solve even in the case of a few variables; if nonlinear, they are unsolvable except in special cases (Table 1.1).

For this reason, computers have opened a new approach in systems research: not only by facilitating calculations which would otherwise exceed available time and energy and by replacing mathematical ingenuity with routine procedures, but also by opening up fields where no mathematical theory or ways of solution exist. Thus systems far exceeding conventional mathematics can be computerized; on the other hand, actual laboratory experiment can be replaced by computer simulation, and the model so developed can then be checked against experimental data. In this way, for example, B. Hess has calculated the fourteen-step reaction chain of glycolysis in the cell in a model of more than 100 nonlinear differential equations. Similar analyses are routine in economics, market research, etc.
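What such computerization amounts to may be indicated by a minimal sketch (an arbitrary two-variable nonlinear system, not Hess's glycolysis model): the equations have no general closed-form solution, but a simple stepping procedure integrates them numerically.

```python
# Minimal sketch (illustrative only): numerical integration, by Euler steps,
# of a small nonlinear system of simultaneous differential equations -- the
# kind that is "unsolvable except in special cases" analytically but routine
# on a computer. The product terms q1*q2 are the nonlinear "strong interactions".

def simulate(q1=10.0, q2=5.0, a=0.6, b=0.05, c=0.5, d=0.02, dt=0.001, steps=20000):
    """Integrate dq1/dt = a*q1 - b*q1*q2 and dq2/dt = -c*q2 + d*q1*q2."""
    trajectory = []
    for step in range(steps):
        dq1 = a * q1 - b * q1 * q2
        dq2 = -c * q2 + d * q1 * q2
        q1 += dq1 * dt
        q2 += dq2 * dt
        if step % 2000 == 0:
            trajectory.append((round(step * dt, 2), round(q1, 3), round(q2, 3)))
    return trajectory

if __name__ == "__main__":
    for t, q1, q2 in simulate():
        print(f"t={t:6.2f}  Q1={q1:8.3f}  Q2={q2:8.3f}")
```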

Compartment theory. An aspect of systems which may be listed separately because of the high sophistication reached in the field is compartment theory (Rescigno and Segre, 1966), i.e., the system consists of subunits with certain boundary conditions between which transport processes take place. Such compartment systems may have, e.g., “catenary” or “mammillary” structure (chain of compartments or a central compartment communicating with a number of peripheral ones). Understandably, mathematical difficulties become prohibitive in the case of three- or multicompartment systems. Laplace transforms and introduction of net and graph theory make analysis possible.
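A minimal example of the “catenary” case (a standard two-compartment illustration, not taken from the original text) shows why the Laplace transform helps: it turns the transport equations into algebraic ones.

```latex
% Catenary chain of two compartments with first-order transport:
\frac{dQ_1}{dt} = -k_1 Q_1, \qquad \frac{dQ_2}{dt} = k_1 Q_1 - k_2 Q_2 .
% Laplace transformation (with Q_2(0) = 0) reduces this to algebra:
\tilde{Q}_1(s) = \frac{Q_1(0)}{s + k_1}, \qquad
\tilde{Q}_2(s) = \frac{k_1\,\tilde{Q}_1(s)}{s + k_2} .
```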

Set theory. The general formal properties of systems, closed and open systems, etc., can be axiomatized in terms of set theory (Mesarovic, 1964; Maccia, 1966). In mathematical elegance this approach compares favorably with the cruder and more special formulations of “classical” system theory. The connections of axiomatized systems theory (or its present beginnings) with actual systems problems are somewhat tenuous.

Graph theory. Many systems problems concern structural or topological properties of systems rather than quantitative relations. Some approaches are available in this respect. Graph theory, especially the theory of directed graphs (digraphs), elaborates relational structures by representing them in a topological space. It has been applied to relational aspects of biology (Rashevsky, 1956, 1960; Rosen, 1960). Mathematically, it is connected with matrix algebra; modelwise, with the compartment theory of systems containing partly “permeable” subsystems, and from there with the theory of open systems.
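The connection with matrix algebra may be sketched briefly (an illustrative digraph, chosen arbitrarily): a digraph is coded as an adjacency matrix, and powers of that matrix count the directed paths between points.

```python
# Illustrative sketch: a directed graph ("digraph") represented by its
# adjacency matrix; entry (i, j) of the k-th matrix power counts the
# directed paths of length k from node i to node j.

def mat_mult(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A small digraph on 4 nodes: an arrow i -> j is a 1 in row i, column j.
adjacency = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]

paths2 = mat_mult(adjacency, adjacency)   # paths of length 2
paths3 = mat_mult(paths2, adjacency)      # paths of length 3
print("paths of length 2:", paths2)
print("paths of length 3:", paths3)
```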

Net theory, in its turn, is connected with set, graph, compartment, etc., theories and is applied to such systems as nervous networks (e.g., Rapoport, 1949-50).

Cybernetics is a theory of control systems based on communication (transfer of information) between system and environment and within the system, and on control (feedback) of the system’s function with regard to its environment. As mentioned, and to be discussed further, the model is of wide application but should not be identified with “systems theory” in general. In biology and other basic sciences, the cybernetic model is apt to describe the formal structure of regulatory mechanisms, e.g., by block and flow diagrams. Thus the regulatory structure can be recognized even when actual mechanisms remain unknown and undescribed, and the system is a “black box” defined only by input and output. For similar reasons, the same cybernetic scheme may apply to hydraulic, electric, physiological, etc., systems. The highly elaborate and sophisticated theory of servomechanisms in technology has been applied to natural systems only to a limited extent (cf. Bayliss, 1966; Kalmus, 1966; Milsum, 1966).
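The feedback scheme itself can be sketched in a few lines (an illustrative proportional controller with arbitrary numbers, not a model of any particular mechanism): the output is measured, compared with a set value, and the difference drives the correction, whatever the nature of the “black box” in between.

```python
# Minimal sketch of a negative-feedback (cybernetic) loop: the controlled
# variable is measured, compared with a set value, and the error drives a
# corrective input; a constant disturbance acts on the system throughout.

def run_feedback(set_point=37.0, value=30.0, gain=0.4, disturbance=-0.5, steps=25):
    history = []
    for _ in range(steps):
        error = set_point - value           # comparator
        correction = gain * error           # controller (proportional)
        value += correction + disturbance   # the "plant" responds
        history.append(round(value, 3))
    return history

print(run_feedback())  # the value settles near the set point despite the disturbance
```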

Information theory, in the sense of Shannon and Weaver (1949), is based on the concept of information, defined by an expression isomorphic to negative entropy of thermodynamics. Hence the expectation that information may be used as measure of organization (cf. p. 42; Quastler, 1955). While information theory gained importance in communication engineering, its applications to science have remained rather unconvincing (E.N. Gilbert, 1966). The relationship between information and organization, information theory and thermodynamics, remains a major problem (cf. pp. 151ff.).
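The expression in question is Shannon’s measure, formally isomorphic to the entropy expression of statistical mechanics (both standard formulas, given here for comparison):

```latex
% Shannon's measure of information and the Boltzmann entropy it is
% formally isomorphic to (apart from sign convention and constant):
H = -\sum_i p_i \log_2 p_i \quad\text{(bits)}, \qquad
S = -k \sum_i p_i \ln p_i \quad\text{(entropy)} .
```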

Theory of automata (see Minsky, 1967) is the theory of abstract automata with input, output, and possibly trial-and-error and learning. A general model is the Turing machine (1936). Expressed in the simplest way, a Turing automaton is an abstract machine capable of imprinting (or deleting) “1” and “0” marks on a tape of infinite length. It can be shown that any process of whatever complexity can be simulated by such a machine if the process can be expressed in a finite number of logical operations. Whatever is possible logically (i.e., in an algorithmic symbolism) can also be constructed—in principle, though of course by no means always in practice—by an automaton, i.e., an algorithmic machine.
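A minimal sketch of such an automaton may make the tape-and-marks description concrete (illustrative only; the transition table here merely inverts a block of marks and halts).

```python
# Minimal Turing-machine sketch: a finite control reads and writes "1"/"0"
# marks on an unbounded tape. The sample transition table inverts a block
# of marks and then halts.

def run_turing(tape_symbols, transitions, start_state="scan", halt_state="halt"):
    tape = dict(enumerate(tape_symbols))   # sparse tape; blank cells read "_"
    head, state = 0, start_state
    while state != halt_state:
        symbol = tape.get(head, "_")
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

# (state, read) -> (write, move, next_state): invert 0/1 until a blank, then halt.
invert = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing("10110", invert))   # -> 01001_
```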

Game theory (von Neumann and Morgenstern, 1947) is a rather different approach, but it may be ranged among the systems sciences because it is concerned with the behavior of supposedly “rational” players trying to obtain maximal gains and minimal losses by appropriate strategies against the other player (or against nature). Hence it essentially concerns a “system” of antagonistic “forces” with specifications.
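A small illustration (not from the original text): the “rational” player’s maximin choice on an arbitrary zero-sum payoff matrix picks the strategy whose worst possible outcome is best.

```python
# Illustrative sketch: the maximin ("security level") choice in a zero-sum
# game. Rows are the player's strategies, columns the opponent's; entries
# are the player's gains. The payoff matrix is arbitrary.

payoffs = [
    [ 3, -1,  2],
    [ 1,  0,  1],
    [-2,  4,  0],
]

worst_cases = [min(row) for row in payoffs]                        # [-1, 0, -2]
best_row = max(range(len(payoffs)), key=lambda i: worst_cases[i])
print("maximin strategy: row", best_row,
      "guaranteeing at least", worst_cases[best_row])              # row 1, at least 0
```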

Decision theory is a mathematical theory concerned with choices among alternatives.

Queuing theory concerns optimization of arrangements under conditions of crowding.

Inhomogeneous and incomplete as it is, confounding models (e.g., open system, feedback circuit) with mathematical techniques (e.g., set, graph, game theory), such an enumeration is apt to show that there is an array of approaches to investigate systems, including powerful mathematical methods. The point to be reiterated is that problems previously not envisaged, not manageable, or considered as being beyond science or purely philosophical are progressively being explored.

Naturally, an incongruence between model and reality often exists. There are highly elaborate and sophisticated mathematical models, but it remains dubious how they can be applied to the concrete case; there are fundamental problems for which no mathematical techniques are available. Disappointment of overextended expectations has occurred. Cybernetics, e.g., proved its impact not only in technology but in basic sciences, yielding models for concrete phenomena and bringing teleological phenomena—previously tabooed—into the range of scientifically legitimate problems; but it did not yield an all-embracing explanation or grand “world view,” being an extension rather than a replacement of the mechanistic view and machine theory (cf. Bronowski, 1964). Information theory, highly developed mathematically, proved disappointing in psychology and sociology. Game theory was hopefully applied to war and politics; but one hardly feels that it has led to an improvement of political decisions and the state of the world; a failure not unexpected when considering how little the powers that be resemble the “rational” players of game theory. Concepts and models of equilibrium, homeostasis, adjustment, etc., are suitable for the maintenance of systems, but inadequate for phenomena of change, differentiation, evolution, negentropy, production of improbable states, creativity, building-up of tensions, self-realization, emergence, etc.; as indeed Cannon realized when he acknowledged, besides homeostasis, a “heterostasis” including phenomena of the latter nature. The theory of open systems applies to a wide range of phenomena in biology (and technology), but a warning is necessary against its incautious expansion to fields for which its concepts are not made. Such limitations and lacunae are only what is to be expected in a field hardly older than twenty or thirty years. In the last resort, disappointment results from making what is a useful model in certain respects into some metaphysical reality and “nothing-but” philosophy, as has happened many times in intellectual history.

The advantages of mathematical models—unambiguity, possibility of strict deduction, verifiability by observed data—are well known. This does not mean that models formulated in ordinary language are to be despised or refused.

A verbal model is better than no model at all, or than a model which, because it can be formulated mathematically, is forcibly imposed upon and falsifies reality. Theories of enormous influence, such as psychoanalysis, were unmathematical; or, as with the theory of selection, their impact far exceeded the mathematical constructions which came only later and which cover only partial aspects and a small fraction of the empirical data.

Mathematics essentially means the existence of an algorithm which is much more precise than that of ordinary language. The history of science attests that expression in ordinary language often preceded mathematical formulation, i.e., the invention of an algorithm. Examples come easily to mind: the evolution from counting in words to Roman numerals (a semiverbal, clumsy half-algorithm) to Arabic notation with position value; equations, from verbal formulation to rudimentary symbolism handled with virtuosity (but difficult for us to follow) by Diophantus and other founders of algebra, to modern notation; theories like those of Darwin or of economics which only later found a (partial) mathematical formulation. It may be preferable first to have some nonmathematical model with its shortcomings but expressing some previously unnoticed aspect, hoping for future development of a suitable algorithm, than to start with premature mathematical models following known algorithms which may restrict the field of vision. Many developments in molecular biology, the theory of selection, cybernetics and other fields showed the blinding effects of what Kuhn calls “normal” science, i.e., monolithically accepted conceptual schemes.

Models in ordinary language therefore have their place in systems theory. The system idea retains its value even where it cannot be formulated mathematically, or remains a “guiding idea” rather than being a mathematical construct. For example, we may not have satisfactory system concepts in sociology; but the mere insight that social entities are systems rather than sums of social atoms, or that history consists of systems (however ill defined) called civilizations obeying principles general to systems, implies a reorientation in the fields concerned.

As can be seen from the above survey, there are, within the “systems approach,” mechanistic and organismic trends and models, trying to master systems either by “analysis,” “linear (including circular) causality,” “automata,” or else by “wholeness,” “interaction,” “dynamics” (or what other words may be used to circumscribe the difference). While these models are not mutually exclusive and the same phenomena may even be approached by different models (e.g., “cybernetic” or “kinetic” concepts; cf. Locker, 1964), it can be asked which point of view is the more general and fundamental one. In general terms, this is a question to be put to the Turing machine as a general automaton.

One consideration to the point (not, so far as we have seen, treated in automata theory) is the problem of “immense” numbers. The fundamental statement of automata theory is that happenings that can be defined in a finite number of “words” can be realized by an automaton (e.g., a formal neural network after McCulloch and Pitts, or a Turing machine) (von Neumann, 1951). The question lies in the term “finite.” The automaton can, by definition, realize a finite series of events (however large), but not an infinite one. But what if the number of steps required is “immense,” i.e., not infinite, but, for example, transcending the number of particles in the universe (estimated to be of the order of 10^80) or the number of events possible in the time span of the universe or some of its subunits (according to Elsasser’s, 1966, proposal, a number whose logarithm is itself a large number)? Such immense numbers appear in many system problems involving exponentials, factorials and other explosively increasing functions. They are encountered in systems even of a moderate number of components with strong (nonnegligible) interactions (cf. Ashby, 1964). To “map” them in a Turing machine, a tape of “immense” length would be required, i.e., one exceeding not only practical but physical limitations.

Consider, for a simple example, a directed graph of N points (Rapoport, 1959b). Between each pair of points an arrow may exist or not exist (two possibilities). There are therefore 2^(N(N-1)) different ways to connect N points. If N is only 5, there are over a million ways to connect the points. With N = 20, the number of ways exceeds the estimated number of atoms in the universe. Similar problems arise, e.g., with the possible connections between neurons (estimated to be of the order of 10 billion in the human brain) and with the genetic code (Repge, 1962). In the code, there is a minimum of 20 “words” (nucleotide triplets) spelling the twenty amino acids (actually 64); the code may contain some millions of such units. This gives 20^1,000,000 possibilities. Suppose the Laplacean spirit were to find out the functional value of every combination: he would have to make that number of probes, but there are only some 10^80 atoms and organisms in the universe. Let us presume (Repge, 1962) that 10^30 cells are present on the earth at a given point of time. Further assuming a new cell generation every minute would give, for an age of the earth of 15 billion years (10^16 minutes), 10^46 cells in total. To be sure of obtaining a maximum number, 10^20 life-bearing planets may be assumed. Then, in the whole universe, there certainly would be no more than 10^66 living beings—which is a great number but far from being “immense.” The estimate can be made with different assumptions (e.g., the number of possible proteins or enzymes) but with essentially the same result.
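The arithmetic of the directed-graph example can be checked directly (an illustrative computation, using the order-of-magnitude estimate of 10^80 atoms quoted above):

```python
# The number of distinct directed graphs on N labelled points, each ordered
# pair either joined by an arrow or not, is 2**(N*(N-1)).

ATOMS_IN_UNIVERSE = 10 ** 80   # order-of-magnitude estimate used in the text

for n in (5, 10, 20):
    graphs = 2 ** (n * (n - 1))
    print(f"N = {n:2d}: 2^{n * (n - 1)} is about 10^{len(str(graphs)) - 1}")

print("exceeds the atoms in the universe already at N = 20:",
      2 ** (20 * 19) > ATOMS_IN_UNIVERSE)   # True
```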

Again, according to Hart (1959), human invention can be conceived as new combinations of previously existing elements. If so, the opportunity for new inventions will increase roughly as a function of the number of possible permutations and combinations of available elements, which means that its increase will be a factorial of the number of elements. The rate of acceleration of social change is then itself accelerating, so that in many cases not a logarithmic but a log-log acceleration will be found in cultural change. Hart presents interesting curves showing that increases in human speed, in the killing area of weapons, in life expectation, etc., actually followed such an expression, i.e., that the rate of cultural growth is not exponential or compound-interest, but is a super-acceleration in the way of a log-log curve. In a general way, limits of automata will appear if regulation in a system is directed not against one or a limited number of disturbances, but against “arbitrary” disturbances, i.e., an indefinite number of situations that could not possibly have been “foreseen”; this is widely the case in embryonic (e.g., experiments of Driesch) and neural (e.g., experiments of Lashley) regulations. Regulation here results from the interaction of many components (cf. discussion in Jeffries, 1951, pp. 32ff.). This, as von Neumann himself conceded, seems connected with the “self-restoring” tendencies of organismic as contrasted with technological systems; expressed in more modern terms, with their open-system nature, which is not provided for even in the abstract model of an automaton such as a Turing machine.

It appears therefore that, as vitalists like Driesch have emphasized long ago, the mechanistic conception, even taken in the modern and generalized form of a Turing automaton, founders with regulations after “arbitrary” disturbances, and similarly in happenings where the number of steps required is “immense” in the sense indicated. Problems of realizability appear even apart from the paradoxes connected with infinite sets.

The above considerations pertain particularly to a concept or complex of concepts which indubitably is fundamental in the general theory of systems: that of hierarchic order. We presently “see” the universe as a tremendous hierarchy, from elementary particles to atomic nuclei, to atoms, molecules, high-molecular compounds, to the wealth of structures (electron- and light-microscopic) between molecules and cells (Weiss, 1962b), to cells, organisms and beyond to supra-individual organizations. One attractive scheme of hierarchic order (there are others) is that of Boulding (Table 1.2). A similar hierarchy is found both in “structures” and in “functions.” In the last resort, structure (i.e., order of parts) and function (order of processes) may be the very same thing: in the physical world matter dissolves into a play of energies, and in the biological world structures are the expression of a flow of processes. At present, the system of physical laws relates mainly to the realm between atoms and molecules (and their summation in macrophysics), which obviously is a slice of a much broader spectrum. Laws of organization and organizational forces are insufficiently known in the subatomic and the supermolecular realms. There are inroads into both the subatomic world (high-energy physics) and the supermolecular one (physics of high-molecular compounds); but these are apparently only beginnings. This is shown, on the one hand, by the present confusion of elementary particles, and on the other, by the present lack of physical understanding of structures seen under the electron microscope and the lack of a “grammar” of the genetic code (cf. p. 153).

A general theory of hierarchic order obviously will be a mainstay of general systems theory. Principles of hierarchic order can be stated in verbal language (Koestler, 1967; in press); there are semimathematical ideas (Simon, 1965) connected with matrix theory, and formulations in terms of mathematical logic (Woodger, 1930-31). In graph theory hierarchic order is expressed by the “tree,” and relational aspects of hierarchies can be represented in this way. But the problem is much broader and deeper: the question of hierarchic order is intimately connected with those of differentiation, evolution, and the measure of organization which does not seem to be expressed adequately in terms either of energetics (negative entropy) or of information theory (bits) (cf. pp. 150ff.). In the last resort, as mentioned, hierarchic order and dynamics may be the very same, as Koestler has nicely expressed in his simile of “The Tree and the Candle.”
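As a small illustration of the graph-theoretical “tree” (using, loosely, some of the levels named above; the encoding is of course arbitrary), hierarchic order can be represented as a rooted digraph in which each node points to the nodes it contains.

```python
# Hierarchic order expressed as a "tree": each level points to the level(s)
# it contains. The chain below uses a few of the levels named in the text.

hierarchy = {
    "supra-individual organization": ["organism"],
    "organism": ["cell"],
    "cell": ["high-molecular compound"],
    "high-molecular compound": ["molecule"],
    "molecule": ["atom"],
    "atom": ["elementary particle"],
    "elementary particle": [],
}

def depth(node, tree):
    """Number of levels below a node in the hierarchy."""
    children = tree[node]
    return 0 if not children else 1 + max(depth(child, tree) for child in children)

print("levels below the top:", depth("supra-individual organization", hierarchy))  # 6
```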

Thus there is an array of system models, more or less progressed and elaborate. Certain concepts, models and principles of general systems theory, such as hierarchic order, progressive differentiation, feedback, systems characteristics defined by set and graph theory, etc., are applicable broadly to material, psychological and sociocultural systems; others, such as open system defined by the exchange of matter, are limited to certain subclasses. As practice in applied systems analysis shows, diverse system models will have to be applied according to the nature of the case and operational criteria.

NB.—This survey is impressionistic and intuitive, with no claim to logical rigor. Higher levels as a rule presuppose lower ones (e.g., life phenomena those at the physico-chemical level, socio-cultural phenomena the level of human activity, etc.); but the relation of levels requires clarification in each case (cf. problems such as open system and genetic code as apparent prerequisites of “life”; the relation of “conceptual” to “real” systems, etc.). In this sense, the survey suggests both the limits of reductionism and the gaps in actual knowledge.

Source: Bertalanffy, Ludwig von (1969), General System Theory: Foundations, Development, Applications. New York: George Braziller, revised edition.
