Methods in General Systems Research

Ashby (1958a) has admirably outlined two possible ways or general methods in systems study:

Two main lines are readily distinguished. One, already well developed in the hands of von Bertalanffy and his co-workers, takes the world as we find it, examines the various systems that occur in it—zoological, physiological, and so on—and then draws up statements about the regularities that have been observed to hold. This method is essentially empirical. The second method is to start at the other end. Instead of studying first one system, then a second, then a third, and so on, it goes to the other extreme, considers the set of all conceivable systems and then reduces the set to a more reasonable size. This is the method I have recently followed.

It will easily be seen that all systems studies follow one or the other of these methods or a combination of both. Each of the approaches has its advantages as well as shortcomings.

(1) The first method is empirico-intuitive; it has the advantage that it remains rather close to reality and can easily be illustrated and even verified by examples taken from the individual fields of science. On the other hand, the approach lacks mathematical elegance and deductive strength and, to the mathematically minded, will appear naive and unsystematic. Nevertheless, the merits of this empirico-intuitive procedure should not be minimized.

The present writer has stated a number of “system principles,” partly in the context of biological theory and without explicit reference to G.S.T. (von Bertalanffy, 1960a, pp. 37-54), partly in what emphatically was entitled an “Outline” of this theory (Chapter 3). This was meant in the literal sense: It was intended to call attention to the desirability of such a field, and the presentation was in the way of a sketch or blueprint, illustrating the approach by simple examples.

However, it turned out that this intuitive survey appears to be remarkably complete. The main principles offered, such as wholeness, sum, centralization, differentiation, leading part, closed and open system, finality, equifinality, growth in time, relative growth, and competition, have been used in manifold ways (e.g., general definition of system: Hall and Fagen, 1956; types of growth: Keiter, 1951-52; systems engineering: A.D. Hall, 1962; social work: Hearn, 1958). Excepting minor variations in terminology intended for clarification or due to the subject matter, no principles of similar significance were added—even though this would be highly desirable. It is perhaps even more significant that this also applies to considerations which do not refer to the present writer’s work and hence cannot be said to be unduly influenced by it. Perusal of studies such as those by Beer (1960) and Kremyanskiy (1960) on principles, Bradley and Calvin (1956) on the network of chemical reactions, Haire (1959) on growth of organizations, etc., will easily show that they are also using the “Bertalanffy principles.”

(2) The way of deductive systems theory was followed by Ashby (1958b). A more informal presentation which summarizes Ashby’s reasoning (1962) lends itself particularly well to analysis.

Ashby asks about the “fundamental concept of machine” and answers the question by stating “that its internal state, and the state of its surroundings, defines uniquely the next state it will go to.” If the variables are continuous, this definition corresponds to the description of a dynamic system by a set of ordinary differential equations with time as the independent variable. However, such representation by differential equations is too restricted for a theory to include biological systems and calculating machines where discontinuities are ubiquitous. Therefore the modern definition is the “machine with input”: It is defined by a set S of internal states, a set I of input and a mapping ƒ of the product set I × S into S. “Organization,” then, is defined by specifying the machine’s states S and its conditions I. If S is a product set, S = T₁ × T₂ × … × Tₙ, with the Tᵢ as the parts and the organization specified by the mapping ƒ, a “self-organizing” system, according to Ashby, can have two meanings, namely: (1) The system starts with its parts separate, and these parts then change toward forming connections (example: cells of the embryo, first having little or no effect on one another, join by formation of dendrites and synapses to form the highly interdependent nervous system). This first meaning is “changing from unorganized to organized.” (2) The second meaning is “changing from a bad organization to a good one” (examples: a child whose brain organization makes it fire-seeking at first, while a new brain organization makes it fire-avoiding; an automatic pilot and plane coupled first by deleterious positive feedback and then improved). “There the organization is bad. The system would be ‘self-organizing’ if a change were automatically made” (changing positive into negative feedback). But “no machine can be self-organizing in this sense” (author’s italics). For adaptation (e.g., of the homeostat or in a self-programming computer) means that we start with a set S of states, and that ƒ changes into g, so that organization is a variable, e.g., a function of time α(t) which has first the value ƒ and later the value g.

However, this change “cannot be ascribed to any cause in the set S; so it must come from some outside agent, acting on the system S as input” (our italics). In other terms, to be “self-organizing” the machine S must be coupled to another machine.
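
Ashby’s formal scheme is easy to make concrete. The sketch below is a minimal illustration, not Ashby’s or the present text’s own construction: it writes a “machine with input” as an explicit transition table ƒ on I × S and then replaces ƒ by a second table g. The particular states, inputs and tables are invented for the example.

```python
# States, inputs and transition tables are invented purely for illustration;
# they are not Ashby's own example.

S = {"cool", "warm", "hot"}          # set S of internal states
I = {"heat_on", "heat_off"}          # set I of inputs (the machine's conditions)

# One organization f, written out as an explicit table on I x S -> S.
f = {
    ("heat_on",  "cool"): "warm",
    ("heat_on",  "warm"): "hot",
    ("heat_on",  "hot"):  "hot",
    ("heat_off", "cool"): "cool",
    ("heat_off", "warm"): "cool",
    ("heat_off", "hot"):  "warm",
}

def run(table, state, inputs):
    """Iterate the machine: the next state is table[(input, current state)]."""
    for inp in inputs:
        state = table[(inp, state)]
    return state

print(run(f, "cool", ["heat_on", "heat_on", "heat_off"]))   # -> warm

# A second organization g on the same states and inputs (any different table).
g = {key: "cool" for key in f}

# Ashby's point: the step from f to g is not caused by anything in S itself;
# it is imposed here by the surrounding program, i.e., by an outside agent
# acting on the machine as input.
organization = g
print(run(organization, "hot", ["heat_off"]))               # -> cool
```

The last assignment is the crux of the argument quoted above: nothing in the set S produces the change from ƒ to g; it is imposed by the surrounding program, which plays the role of the outside agent.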

This concise statement permits observation of the limitations of this approach. We completely agree that description by differential equations is not only a clumsy but, in principle, inadequate way to deal with many problems of organization. The author was well aware of this, emphasizing that a system of simultaneous differential equations is by no means the most general formulation and is chosen only for illustrative purposes (Chapter 3).

However, in overcoming this limitation, Ashby introduced another one. His “modern definition” of system as a “machine with input,” as reproduced above, supplants the general system model by another rather special one: the cybernetic model—i.e., a system open to information but closed with respect to entropy transfer. This becomes apparent when the definition is applied to “self-organizing systems.” Characteristically, the most important kind of these has no place in Ashby’s model, namely systems organizing themselves by way of progressive differentiation, evolving from states of lower to states of higher complexity. This is, of course, the most obvious form of “self-organization,” apparent in ontogenesis, probable in phylogenesis, and certainly also valid in many social organizations. We have here not a question of “good” (i.e., useful, adaptive) or “bad” organization which, as Ashby correctly emphasizes, is relative to circumstances; increase in differentiation and complexity—whether useful or not—is a criterion that is objective and at least in principle amenable to measurement (e.g., in terms of decreasing entropy, of information). Ashby’s contention that “no machine can be self-organizing,” more explicitly, that the “change cannot be ascribed to any cause in the set S” but “must come from some outside agent, an input” amounts to exclusion of self-differentiating systems. The reason that such systems are not permitted as “Ashby machines” is patent. Self-differentiating systems that evolve toward higher complexity (decreasing entropy) are, for thermodynamic reasons, possible only as open systems—e.g., systems importing matter containing free energy to an amount overcompensating the increase in entropy due to irreversible processes within the system (“import of negative entropy” in Schrödinger’s expression). However, we cannot say that “this change comes from some outside agent, an input”; the differentiation within a developing embryo and organism is due to its internal laws of organization, and the input (e.g., oxygen supply which may vary quantitatively, or nutrition which can vary qualitatively within a broad spectrum) makes it only possible energetically.
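
The thermodynamic reasoning can be stated compactly in the entropy balance customarily written for open systems (the notation below follows Prigogine’s standard formulation and is supplied here for convenience; it is not quoted from the passage):

```latex
% dS   : total entropy change of the system
% d_iS : entropy produced by irreversible processes inside the system (always >= 0)
% d_eS : entropy carried across the boundary with imported or exported matter and energy
\[
  dS = d_i S + d_e S, \qquad d_i S \ge 0 .
\]
% Closed system: d_eS = 0, so dS >= 0 and the system runs down toward maximum
% homogeneity (the case of Ashby's machine).  Open system: d_eS may be negative
% ("import of negative entropy"), and if it overcompensates internal production,
\[
  d_e S < 0, \qquad \lvert d_e S \rvert > d_i S \;\Longrightarrow\; dS < 0 ,
\]
% the system as a whole can move toward lower entropy, i.e., toward higher
% order, differentiation and complexity.
```

On this balance the imported matter does not direct the differentiation; it merely makes dS < 0 attainable, which is the distinction drawn above between an energetic condition and an “outside agent” causing the change.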

The above is further illustrated by additional examples given by Ashby. Suppose a digital computer is carrying through multiplications at random; then the machine will “evolve” toward showing even numbers (because products even × even as well as even × odd give even numbers), and eventually only zeros will be “surviving.” In still another version Ashby quotes Shannon’s Tenth Theorem, stating that if a correction channel has capacity H, equivocation of the amount H can be removed, but no more. Both examples illustrate the working of closed systems: The “evolution” of the computer is one toward disappearance of differentiation and establishment of maximum homogeneity (analog to the second principle in closed systems); Shannon’s Theorem similarly concerns closed systems where no negative entropy is fed in. Compared to the information content (organization) of a living system, the imported matter (nutrition, etc.) carries not information but “noise.” Nevertheless, its negative entropy is used to maintain or even to increase the information content of the system. This is a state of affairs apparently not provided for in Shannon’s Tenth Theorem, and understandably so as he is not treating information transfer in open systems with transformation of matter.
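
Ashby’s computer example can be reproduced directly. The following sketch is a hypothetical setup, not Ashby’s original program: fixed-width registers are overwritten by randomly chosen multiplications; since a product is even whenever either factor is even, and since factors of two accumulate under a fixed word length, the pool drifts toward even values and finally toward zeros, i.e., toward the maximum homogeneity described above.

```python
import random

def ashby_computer(pool_size=20, word_bits=8, steps=3000, seed=1):
    """Simulate a store of fixed-width registers doing multiplications at random.

    Each step picks two registers at random and overwrites the first with the
    low word_bits bits of their product (as a fixed-length machine register
    would).  A register's parity can only move from odd to even, and factors
    of two accumulate, so the pool 'evolves' toward even numbers and
    eventually toward zeros.
    """
    rng = random.Random(seed)
    modulus = 2 ** word_bits
    pool = [rng.randrange(1, modulus) for _ in range(pool_size)]
    for step in range(1, steps + 1):
        i, j = rng.randrange(pool_size), rng.randrange(pool_size)
        pool[i] = (pool[i] * pool[j]) % modulus
        if step % 500 == 0:
            evens = sum(1 for x in pool if x % 2 == 0)
            zeros = pool.count(0)
            print(f"step {step:5d}: {evens:2d}/{pool_size} even, {zeros:2d} zeros")
    return pool

final = ashby_computer()
print("final pool:", final)   # typically all zeros: only zeros 'survive'
```

The simulation is, of course, a closed system in the sense used here: nothing is imported that could offset the progressive loss of differentiation.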

In both respects, the living organism (and other behavioral and social systems) is not an Ashby machine because it evolves toward increasing differentiation and inhomogeneity, and can correct “noise” to a higher degree than an inanimate communication channel. Both, however, are consequences of the organism’s character as an open system.

Incidentally, it is for similar reasons that we cannot replace the concept of “system” by the generalized “machine” concept of Ashby. Even though the latter is more liberal compared to the classic one (machines defined as systems with fixed arrangement of parts and processes), the objections against a “machine theory” of life (von Bertalanffy, 1960, pp. 16-20 and elsewhere) remain valid.

These remarks are not intended as adverse criticism of Ashby’s or the deductive approach in general; they only emphasize that there is no royal road to general systems theory. Like every other scientific field, it will have to develop by an interplay of empirical, intuitive and deductive procedures. If the intuitive approach leaves much to be desired in logical rigor and completeness, the deductive approach faces the difficulty of whether the fundamental terms are correctly chosen. This is not a particular fault of the theory or of the workers concerned but a rather common phenomenon in the history of science; one may, for example, remember the long debate as to what magnitude—force or energy—is to be considered as constant in physical transformations until the issue was decided in favor of mv²/2.

In the present writer’s mind, G.S.T. was conceived as a working hypothesis; being a practicing scientist, he sees the main function of theoretical models in the explanation, prediction and control of hitherto unexplored phenomena. Others may, with equal right, emphasize the importance of the axiomatic approach and quote to this effect examples like the theory of probability, non-Euclidean geometries and, more recently, information and game theory, which were first developed as deductive mathematical fields and later applied in physics or other sciences. There should be no quarrel about this point. The danger, in both approaches, is to consider too early the theoretical model as being closed and definitive—a danger particularly important in a field like general systems which is still groping to find its correct foundations.

Source: von Bertalanffy, Ludwig (1969), General System Theory: Foundations, Development, Applications, George Braziller, revised edition.
